Compare commits


46 Commits

Author SHA1 Message Date
Krishan
a2bee2f255
Add via param to hierarchy endpoint (#18070)
Implementation of
[MSC4235](https://github.com/matrix-org/matrix-spec-proposals/pull/4235)
as per suggestion in [pull request
17750](https://github.com/element-hq/synapse/pull/17750#issuecomment-2411248598).
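
For illustration, the new query parameter can be exercised like this (a hedged sketch: the homeserver URL, room ID and access token are placeholders, and the parameter semantics are the ones proposed in MSC4235):

```python
# Sketch: ask for a space's hierarchy, suggesting a server to resolve
# unknown rooms via. All identifiers below are placeholders.
import requests

resp = requests.get(
    "https://matrix.example.com/_matrix/client/v1/rooms/!space:example.com/hierarchy",
    params={"via": ["other.example.org"]},  # repeated ?via= params per MSC4235
    headers={"Authorization": "Bearer YOUR_ACCESS_TOKEN"},
)
print(resp.status_code, len(resp.json().get("rooms", [])))
```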


---------

Co-authored-by: Quentin Gliech <quenting@element.io>
2025-06-30 12:42:14 +00:00
Erik Johnston
3878699df7
Speed up device deletion (#18602)
This is to handle the case of deleting lots of "bot" devices at once.

Reviewable commit-by-commit

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-06-30 11:48:57 +01:00
Travis Ralston
b35c6483d5
Skip processing policy server events through policy server (#18605)
Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
2025-06-30 11:45:23 +01:00
reivilibre
bfb3a6e700
Improve performance of device deletion by adding missing index. (#18582)
1. Reorder columns in `event_txn_id_device_id_txn_id` index.
   This now satisfies the foreign key on `(user_id, device_id)`, making
   reverse lookups, as needed for device deletions, more efficient.

   This improves device deletion performance by on the order of 8 to 10×
   on matrix.org.


Rationale:

## On the `event_txn_id_device_id` table:

We currently have this index:
```sql
-- This ensures that there is only one mapping per (room_id, user_id, device_id, txn_id) tuple.
CREATE UNIQUE INDEX IF NOT EXISTS event_txn_id_device_id_txn_id 
    ON event_txn_id_device_id(room_id, user_id, device_id, txn_id);
```

The main way we use this table is
```python
        return await self.db_pool.simple_select_one_onecol(
            table="event_txn_id_device_id",
            keyvalues={
                "room_id": room_id,
                "user_id": user_id,
                "device_id": device_id,
                "txn_id": txn_id,
            },
            retcol="event_id",
            allow_none=True,
            desc="get_event_id_from_transaction_id_and_device_id",
        )
```

But this foreign key is not well supported by that index, which makes
deletions in the `devices` table inefficient (each one requires a full
scan of the above index):
```sql
    FOREIGN KEY (user_id, device_id)
        REFERENCES devices (user_id, device_id) ON DELETE CASCADE
```

I propose re-ordering the columns in that index to: `(user_id,
device_id, room_id, txn_id)` (by replacing it).

That way the foreign key back-check can rely on the prefix of this
index, but it's still useful for the original purpose it was made for.

It doesn't take any extra disk space and does not harm write performance
(because the same amount of writing work needs to be performed).
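
To see the intended effect in miniature, here is a standalone sketch (using SQLite purely for illustration; matrix.org runs Postgres, but the index-prefix reasoning is the same):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE event_txn_id_device_id (
        room_id TEXT, user_id TEXT, device_id TEXT, txn_id TEXT, event_id TEXT
    );
    -- The reordered unique index: its (user_id, device_id) prefix now also
    -- serves the reverse lookup needed by the foreign key from devices.
    CREATE UNIQUE INDEX event_txn_id_device_id_txn_id
        ON event_txn_id_device_id (user_id, device_id, room_id, txn_id);
    """
)
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT event_id FROM event_txn_id_device_id"
    " WHERE user_id = ? AND device_id = ?",
    ("@alice:example.com", "ADEVICE"),
).fetchall()
# The plan shows a SEARCH ... USING INDEX rather than a full scan.
print(plan)
```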

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-06-30 10:36:12 +01:00
reivilibre
8afea3d51d
Improve docstring on simple_upsert_many. (#18573)
It came up that this was somewhat confusing and an example might help.

So here's an example :)

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-06-30 10:35:23 +01:00
Johannes Marbach
db710cf29b
Add forget_forced_upon_leave capability as per MSC4267 (#18196)
This adds the capability from
https://github.com/matrix-org/matrix-spec-proposals/pull/4267 under an
experimental feature.

Signed-off-by: Johannes Marbach <n0-0ne+github@mailbox.org>
2025-06-27 15:07:24 -05:00
Erik Johnston
de29c13d41
Fix backwards compat for DirectServeJsonResource (#18600)
As that appears in the module API.

Broke in #18595.
2025-06-26 14:05:48 +00:00
Tulir Asokan
434e38941a
Add federated_user_may_invite spam checker callback (#18241)
Co-authored-by: Sebastian Spaeth <Sebastian@SSpaeth.de>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-06-26 12:27:21 +01:00
dependabot[bot]
b1396475c4
Bump base64 from 0.21.7 to 0.22.1 (#18589)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-25 17:22:39 +01:00
dependabot[bot]
b088194f48
Bump docker/build-push-action from 6.17.0 to 6.18.0 (#18497)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-25 17:12:24 +01:00
dependabot[bot]
2f21b27465
Bump pyasn1-modules from 0.4.1 to 0.4.2 (#18495)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-25 17:00:19 +01:00
dependabot[bot]
3807fd42e1
Bump urllib3 from 2.2.2 to 2.5.0 (#18572) 2025-06-25 15:50:11 +01:00
dependabot[bot]
99474e7fdf
Bump sigstore/cosign-installer from 3.8.2 to 3.9.0 (#18588) 2025-06-25 15:49:25 +01:00
dependabot[bot]
ec13ed4169
Bump docker/setup-buildx-action from 3.10.0 to 3.11.1 (#18587) 2025-06-25 15:46:10 +01:00
dependabot[bot]
62b5b0b962
Bump reqwest from 0.12.15 to 0.12.20 (#18590) 2025-06-25 15:45:28 +01:00
Erik Johnston
0779587f9f
Lift pausing on ratelimited requests to http layer (#18595)
When a request gets ratelimited we (optionally) wait ~500ms before
returning to mitigate clients that like to tightloop on request
failures. However, this is currently implemented by pausing request
processing when we check for ratelimits, which might be deep within
request processing, and e.g. while locks are held. Instead, let's hoist
the pause to the very top of the HTTP handler.

Hopefully, this mitigates the issue where a user sending lots of events
to a single room can see their requests time out due to the combination
of the linearizer and the pausing of the request. Instead, they should
see the requests 429 after ~500ms.

The first commit is a refactor to pass the `Clock` to `AsyncResource`,
the second commit is the behavioural change.
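
Roughly, the new shape is the following (a minimal sketch with made-up names, not the actual `AsyncResource` code):

```python
from twisted.internet import reactor, task

class LimitExceededError(Exception):
    """Stand-in for Synapse's ratelimit exception."""
    retry_after_ms = 500

async def render(request, inner_handler):
    # The pause now lives at the very top of the HTTP layer, so it runs
    # after any locks taken deep inside request processing are released.
    try:
        return await inner_handler(request)
    except LimitExceededError as e:
        await task.deferLater(reactor, e.retry_after_ms / 1000, lambda: None)
        request.setResponseCode(429)
        return {"errcode": "M_LIMIT_EXCEEDED", "retry_after_ms": e.retry_after_ms}
```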
2025-06-25 14:32:55 +00:00
Patrick Cloke
0c7d9919fa
Fix registering of background updates for split main/state db (#18509)
The background updates are being registered on an object that is for the
_state_ database, but the actual tables are on the _main_ database. This
just moves them to a different store that can access the right stuff.

I noticed this when trying to do a full schema dump cause I was curious
what has changed since the last one.

Fixes #16054

2025-06-25 13:59:18 +01:00
dependabot[bot]
6fabf82f4f
Bump types-opentracing from 2.4.10.6 to 2.4.10.20250622 (#18586)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-24 17:30:36 +01:00
Andrew Morgan
cb259eb206 1.133.0rc1 2025-06-24 11:59:23 +01:00
Andrew Morgan
6791e6e250
Unbreak unit tests with Twisted 25.5.0 by adding parsePOSTFormSubmission arg to FakeSite (#18577)
Co-authored-by: anoa's Codex Agent <codex@amorgan.xyz>
2025-06-24 11:52:06 +01:00
V02460
3cabaa84ca
Update PyO3 to version 0.25 (#18578)
Updates `pyo3` to version 0.25.1 and, accordingly, `pyo3-log` to v0.12.4
and `pythonize` to v0.25.0.

PyO3 v0.25 enables Python 3.14 support.
2025-06-23 13:48:07 +01:00
Travis Ralston
74ca7ae720
Add report user API from MSC4260 (#18120)
Co-authored-by: turt2live <1190097+turt2live@users.noreply.github.com>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-06-20 13:02:14 +01:00
Erik Johnston
5102565369
Fixup generated config documentation (#18568)
Somehow it got out of sync; picked up by CI on develop.
2025-06-18 16:40:52 +01:00
Erik Johnston
33e0c25279
Clean up old device_federation_inbox rows (#18546)
Fixes https://github.com/element-hq/synapse/issues/17370
2025-06-18 11:58:31 +00:00
Erik Johnston
73a38384f5 Merge branch 'master' into develop 2025-06-17 15:33:18 +01:00
dependabot[bot]
4a803e8257
Bump dawidd6/action-download-artifact from 9 to 11 (#18556)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 13:47:42 +01:00
dependabot[bot]
51dbbbb40f
Bump types-requests from 2.32.0.20250328 to 2.32.4.20250611 (#18558)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 13:43:01 +01:00
dependabot[bot]
6363d63822
Bump actions/setup-python from 5.5.0 to 5.6.0 (#18555)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 13:42:28 +01:00
Erik Johnston
d1139ebfc1 1.132.0 2025-06-17 13:16:57 +01:00
Erik Johnston
3e571561c9
Fix Cargo.lock after bad merge (#18561)
Broke in #18357
2025-06-17 11:01:32 +01:00
Erik Johnston
a3b80071cd
Always run schema workflow on develop (#18551)
... and release branches, so that we catch any problems that slip through
PR review.
2025-06-17 10:57:34 +01:00
Erik Johnston
f500c7d982
Speed up MAS token introspection (#18357)
We do this by shoving it into Rust. We believe our python http client is
a bit slow.

Also bumps the minimum Rust version to 1.81.0, released last September (over
six months ago).

To allow for async Rust, includes some adapters between Tokio in Rust
and the Twisted reactor in Python.
2025-06-16 16:41:35 +01:00
dependabot[bot]
df04931f0b
Bump base64 from 0.21.7 to 0.22.1 (#18559)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-16 16:33:51 +01:00
Kegan Dougal
f56670515b
bugfix: assert we always pass the create event to get_user_power_level (#18545)
The create event is required if there is no PL event, in which case the
creator gets PL100.
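
For reference, the fallback rule reads roughly like this (an illustrative sketch, not Synapse's actual implementation):

```python
def get_user_power_level(user_id, power_levels_content, create_content):
    # With a power levels event, read the user's PL (or the default).
    if power_levels_content is not None:
        users = power_levels_content.get("users", {})
        return users.get(user_id, power_levels_content.get("users_default", 0))
    # Without one, the create event is required: the creator gets PL 100.
    assert create_content is not None, "create event must always be passed"
    return 100 if user_id == create_content.get("creator") else 0

print(get_user_power_level("@a:hs", None, {"creator": "@a:hs"}))  # 100
print(get_user_power_level("@b:hs", None, {"creator": "@a:hs"}))  # 0
```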


---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-06-13 16:32:24 +00:00
Kegan Dougal
db8a8d33fe
bugfix: calculate the PL for non-creators correctly in v11 rooms (#18547)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-06-13 12:56:39 +01:00
Andrew Morgan
3b94e40cc8
Fix typo of Math.pow, ^ -> ** (#18543) 2025-06-13 11:36:21 +00:00
dependabot[bot]
6b1e3c9c66
Bump requests from 2.32.2 to 2.32.4 (#18533) 2025-06-13 12:34:38 +01:00
Erik Johnston
1709957395
Fix bug where sliding sync ignored room_id_to_include option (#18535)
This was correctly handled for the "fallback" case where the background
updates hadn't finished.

---------

Co-authored-by: Eric Eastwood <erice@element.io>
2025-06-13 11:29:23 +01:00
Quentin Gliech
0de7aa9953
Enable flake8-logging and flake8-logging-format rules in Ruff and fix related issues throughout the codebase (#18542)
This can be reviewed commit by commit.

This enables the `flake8-logging` and `flake8-logging-format` rules in
Ruff, as well as logging exception stack traces in a few places where it
makes sense.

 - https://docs.astral.sh/ruff/rules/#flake8-logging-log
 - https://docs.astral.sh/ruff/rules/#flake8-logging-format-g

### Linting to avoid pre-formatting log messages

See [`adamchainz/flake8-logging` -> *LOG011 avoid pre-formatting log
messages*](https://github.com/adamchainz/flake8-logging/blob/152db2f167/README.rst#log011-avoid-pre-formatting-log-messages)

Practically, this means prefer placeholders (`%s`) over f-strings for
logging.

This is because placeholders are passed as args to loggers, so they can
do special handling of them.
For example, Sentry will record the args separately in their logging
integration:
https://github.com/getsentry/sentry-python/blob/c15b390dfe/sentry_sdk/integrations/logging.py#L280-L284

One small theoretical performance benefit is that messages for disabled log
levels never get formatted, so we don't unnecessarily create formatted
strings.
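
Concretely, the rule amounts to this (a small illustrative example):

```python
import logging

logger = logging.getLogger(__name__)
user_id = "@alice:example.com"

# Flagged by LOG011: the message is pre-formatted, so the argument is baked
# into the string (lost as structured data) and the formatting happens even
# when this level is disabled.
logger.info(f"Deactivating user {user_id}")

# Preferred: pass a %s placeholder and the argument separately. Handlers
# (e.g. Sentry's logging integration) can then record the arg on its own,
# and the string is only built if the record is actually emitted.
logger.info("Deactivating user %s", user_id)
```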
2025-06-13 09:44:18 +02:00
Will Hunt
e4ca593eb6
Log user deactivations (#18541)
A one-liner to give us more clarity when auditing deactivations of user
accounts.

2025-06-12 10:21:39 +00:00
Kegan Dougal
978032141b
bugfix: ensure _get_power_level_for_sender works when there is no PL event (#18534) 2025-06-10 15:11:49 +01:00
dependabot[bot]
142ba5df89
Bump headers from 0.4.0 to 0.4.1 (#18529)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-10 14:38:54 +01:00
Andrew Morgan
eb5dfc19e5 Merge branch 'release-v1.132' into develop 2025-06-10 12:55:36 +01:00
reivilibre
cc6b4980ef Add config doc generation command to lint.sh and add missing config schema. (#18522)
Follows: #17892, #18456

1. Add config doc generation command to lint.sh.
2. Add missing `user_types` config schema.

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-06-10 12:45:31 +01:00
reivilibre
d5da07703d
Config schema documentation CI: fix not failing when it should (#18528)
Follows: #17892

1. Config documentation CI: fix not failing if changes are outstanding.


Shown to work at:
https://github.com/element-hq/synapse/actions/runs/15532406886/job/43724019104?pr=18528

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-06-10 12:44:04 +01:00
reivilibre
96c556081a
Add config doc generation command to lint.sh and add missing config schema. (#18522)
Follows: #17892, #18456

1. Add config doc generation command to lint.sh.
2. Add missing `user_types` config schema.

---------

Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2025-06-10 12:43:58 +01:00
148 changed files with 3958 additions and 784 deletions


@@ -24,13 +24,13 @@ jobs:
      - name: Set up Docker Buildx
        id: buildx
-        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
+        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
      - name: Inspect builder
        run: docker buildx inspect
      - name: Install Cosign
-        uses: sigstore/cosign-installer@3454372f43399081ed03b604cb2d021dabca52bb # v3.8.2
+        uses: sigstore/cosign-installer@fb28c2b6339dcd94da6e4cbcbc5e888961f6f8c3 # v3.9.0
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
@@ -72,7 +72,7 @@ jobs:
      - name: Build and push all platforms
        id: build-and-push
-        uses: docker/build-push-action@1dc73863535b631f98b2378be8619f83b136f4a0 # v6.17.0
+        uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
        with:
          push: true
          labels: |


@@ -14,7 +14,7 @@ jobs:
      # There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
      # (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
      - name: 📥 Download artifact
-        uses: dawidd6/action-download-artifact@07ab29fd4a977ae4d2b275087cf67563dfdf0295 # v9
+        uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
        with:
          workflow: docs-pr.yaml
          run_id: ${{ github.event.workflow_run.id }}


@@ -61,7 +61,7 @@ jobs:
      - name: Set up Docker Buildx
        id: buildx
-        uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
+        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
        with:
          install: true


@@ -5,6 +5,9 @@ on:
    paths:
      - schema/**
      - docs/usage/configuration/config_documentation.md
+  push:
+    branches: ["develop", "release-*"]
  workflow_dispatch:

jobs:
  validate-schema:
@@ -12,7 +15,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
-      - uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
+      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
        with:
          python-version: "3.x"
      - name: Install check-jsonschema
@@ -38,7 +41,7 @@
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
-      - uses: actions/setup-python@8d9ed9ac5c53483de85588cdf95a591a75ab9f55 # v5.5.0
+      - uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
        with:
          python-version: "3.x"
      - name: Install PyYAML
@@ -51,4 +54,4 @@ jobs:
          > docs/usage/configuration/config_documentation.md
      - name: Error in case of any differences
        # Errors if there are now any modified files (untracked files are ignored).
-        run: 'git diff || ! git status --porcelain=1 | grep "^ M"'
+        run: 'git diff --exit-code'


@@ -85,7 +85,7 @@ jobs:
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Install Rust
-        uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
+        uses: dtolnay/rust-toolchain@c1678930c21fb233e4987c4ae12158f9125e5762 # 1.81.0
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
      - uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
        with:
@@ -149,7 +149,7 @@ jobs:
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Install Rust
-        uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
+        uses: dtolnay/rust-toolchain@c1678930c21fb233e4987c4ae12158f9125e5762 # 1.81.0
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
      - name: Setup Poetry
@@ -210,7 +210,7 @@ jobs:
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Install Rust
-        uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
+        uses: dtolnay/rust-toolchain@c1678930c21fb233e4987c4ae12158f9125e5762 # 1.81.0
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
      - uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
        with:
@@ -227,7 +227,7 @@ jobs:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Install Rust
-        uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
+        uses: dtolnay/rust-toolchain@0d72692bcfbf448b1e2afa01a67f71b455a9dcec # 1.86.0
        with:
          components: clippy
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
@@ -247,7 +247,7 @@ jobs:
      - name: Install Rust
        uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master (rust 1.85.1)
        with:
-          toolchain: nightly-2022-12-01
+          toolchain: nightly-2025-04-23
          components: clippy
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
@@ -265,7 +265,7 @@ jobs:
        uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master (rust 1.85.1)
        with:
          # We use nightly so that it correctly groups together imports
-          toolchain: nightly-2022-12-01
+          toolchain: nightly-2025-04-23
          components: rustfmt
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
@@ -362,7 +362,7 @@ jobs:
          postgres:${{ matrix.job.postgres-version }}
      - name: Install Rust
-        uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
+        uses: dtolnay/rust-toolchain@c1678930c21fb233e4987c4ae12158f9125e5762 # 1.81.0
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
      - uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
@@ -404,7 +404,7 @@ jobs:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Install Rust
-        uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
+        uses: dtolnay/rust-toolchain@c1678930c21fb233e4987c4ae12158f9125e5762 # 1.81.0
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
      # There aren't wheels for some of the older deps, so we need to install
@@ -519,7 +519,7 @@ jobs:
        run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
      - name: Install Rust
-        uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
+        uses: dtolnay/rust-toolchain@c1678930c21fb233e4987c4ae12158f9125e5762 # 1.81.0
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
      - name: Run SyTest
@@ -663,7 +663,7 @@ jobs:
          path: synapse
      - name: Install Rust
-        uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
+        uses: dtolnay/rust-toolchain@c1678930c21fb233e4987c4ae12158f9125e5762 # 1.81.0
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
      - name: Prepare Complement's Prerequisites
@@ -695,7 +695,7 @@ jobs:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Install Rust
-        uses: dtolnay/rust-toolchain@e05ebb0e73db581a4877c6ce762e29fe1e0b5073 # 1.66.0
+        uses: dtolnay/rust-toolchain@c1678930c21fb233e4987c4ae12158f9125e5762 # 1.81.0
      - uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
      - run: cargo test


@@ -1,3 +1,51 @@
# Synapse 1.133.0rc1 (2025-06-24)
### Features
- Add support for the [MSC4260 user report API](https://github.com/matrix-org/matrix-spec-proposals/pull/4260). ([\#18120](https://github.com/element-hq/synapse/issues/18120))
### Bugfixes
- Fix an issue where, during state resolution for v11 rooms, Synapse would incorrectly calculate the power level of the creator when there was no power levels event in the room. ([\#18534](https://github.com/element-hq/synapse/issues/18534), [\#18547](https://github.com/element-hq/synapse/issues/18547))
- Fix long-standing bug where sliding sync did not honour the `room_id_to_include` config option. ([\#18535](https://github.com/element-hq/synapse/issues/18535))
- Fix an issue where "Lock timeout is getting excessive" warnings would be logged even when the lock timeout was <10 minutes. ([\#18543](https://github.com/element-hq/synapse/issues/18543))
- Fix an issue where Synapse could calculate the wrong power level for the creator of the room if there was no power levels event. ([\#18545](https://github.com/element-hq/synapse/issues/18545))
### Improved Documentation
- Generate config documentation from JSON Schema file. ([\#18528](https://github.com/element-hq/synapse/issues/18528))
- Fix typo in user type documentation. ([\#18568](https://github.com/element-hq/synapse/issues/18568))
### Internal Changes
- Increase performance of introspecting access tokens when using delegated auth. ([\#18357](https://github.com/element-hq/synapse/issues/18357), [\#18561](https://github.com/element-hq/synapse/issues/18561))
- Log user deactivations. ([\#18541](https://github.com/element-hq/synapse/issues/18541))
- Enable [`flake8-logging`](https://docs.astral.sh/ruff/rules/#flake8-logging-log) and [`flake8-logging-format`](https://docs.astral.sh/ruff/rules/#flake8-logging-format-g) rules in Ruff and fix related issues throughout the codebase. ([\#18542](https://github.com/element-hq/synapse/issues/18542))
- Clean up old, unused rows from the `device_federation_inbox` table. ([\#18546](https://github.com/element-hq/synapse/issues/18546))
- Run config schema CI on develop and release branches. ([\#18551](https://github.com/element-hq/synapse/issues/18551))
- Add support for Twisted `25.5.0`+ releases. ([\#18577](https://github.com/element-hq/synapse/issues/18577))
- Update PyO3 to version 0.25. ([\#18578](https://github.com/element-hq/synapse/issues/18578))
### Updates to locked dependencies
* Bump actions/setup-python from 5.5.0 to 5.6.0. ([\#18555](https://github.com/element-hq/synapse/issues/18555))
* Bump base64 from 0.21.7 to 0.22.1. ([\#18559](https://github.com/element-hq/synapse/issues/18559))
* Bump dawidd6/action-download-artifact from 9 to 11. ([\#18556](https://github.com/element-hq/synapse/issues/18556))
* Bump headers from 0.4.0 to 0.4.1. ([\#18529](https://github.com/element-hq/synapse/issues/18529))
* Bump requests from 2.32.2 to 2.32.4. ([\#18533](https://github.com/element-hq/synapse/issues/18533))
* Bump types-requests from 2.32.0.20250328 to 2.32.4.20250611. ([\#18558](https://github.com/element-hq/synapse/issues/18558))
# Synapse 1.132.0 (2025-06-17)
### Improved Documentation
- Improvements to generate config documentation from JSON Schema file. ([\#18522](https://github.com/element-hq/synapse/issues/18522))
# Synapse 1.132.0rc1 (2025-06-10)
### Features

Cargo.lock (generated, 1266 lines changed): file diff suppressed because it is too large.


@@ -0,0 +1 @@
Support for [MSC4235](https://github.com/matrix-org/matrix-spec-proposals/pull/4235): via query param for hierarchy endpoint. Contributed by Krishan (@kfiven).


@@ -0,0 +1 @@
Add `forget_forced_upon_leave` capability as per [MSC4267](https://github.com/matrix-org/matrix-spec-proposals/pull/4267).


@@ -0,0 +1 @@
Add `federated_user_may_invite` spam checker callback which receives the entire invite event. Contributed by @tulir @ Beeper.

changelog.d/18509.bugfix (new file):

@@ -0,0 +1 @@
Fix `KeyError` on background updates when using split main/state databases.

changelog.d/18573.misc (new file):

@@ -0,0 +1 @@
Improve docstring on `simple_upsert_many`.

changelog.d/18582.bugfix (new file):

@@ -0,0 +1 @@
Improve performance of device deletion by adding missing index.

changelog.d/18595.misc (new file):

@@ -0,0 +1 @@
Better handling of ratelimited requests.

changelog.d/18600.misc (new file):

@@ -0,0 +1 @@
Better handling of ratelimited requests.

changelog.d/18602.misc (new file):

@@ -0,0 +1 @@
Speed up bulk device deletion.

changelog.d/18605.bugfix (new file):

@@ -0,0 +1 @@
Ensure policy servers are not asked to scan policy server change events, allowing rooms to disable the use of a policy server while the policy server is down.

debian/changelog (vendored, 12 lines changed):

@@ -1,3 +1,15 @@
matrix-synapse-py3 (1.133.0~rc1) stable; urgency=medium
* New Synapse release 1.133.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 24 Jun 2025 11:57:47 +0100
matrix-synapse-py3 (1.132.0) stable; urgency=medium
* New Synapse release 1.132.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 17 Jun 2025 13:16:20 +0100
matrix-synapse-py3 (1.132.0~rc1) stable; urgency=medium
* New Synapse release 1.132.0rc1.


@@ -80,6 +80,8 @@ Called when processing an invitation, both when one is created locally or when
receiving an invite over federation. Both inviter and invitee are represented by
their Matrix user ID (e.g. `@alice:example.com`).
Note that federated invites will call `federated_user_may_invite` before this callback.
The callback must return one of:
- `synapse.module_api.NOT_SPAM`, to allow the operation. Other callbacks may still
@@ -97,6 +99,34 @@ be used. If this happens, Synapse will not call any of the subsequent implementa
this callback.
### `federated_user_may_invite`
_First introduced in Synapse v1.133.0_
```python
async def federated_user_may_invite(event: "synapse.events.EventBase") -> Union["synapse.module_api.NOT_SPAM", "synapse.module_api.errors.Codes", bool]
```
Called when processing an invitation received over federation. Unlike `user_may_invite`,
this callback receives the entire event, including any stripped state in the `unsigned`
section, not just the room and user IDs.
The callback must return one of:
- `synapse.module_api.NOT_SPAM`, to allow the operation. Other callbacks may still
decide to reject it.
- `synapse.module_api.errors.Codes` to reject the operation with an error code. In case
of doubt, `synapse.module_api.errors.Codes.FORBIDDEN` is a good error code.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `synapse.module_api.NOT_SPAM`, Synapse falls through to the next one.
The value of the first callback that does not return `synapse.module_api.NOT_SPAM` will
be used. If this happens, Synapse will not call any of the subsequent implementations of
this callback.
If all of the callbacks return `synapse.module_api.NOT_SPAM`, Synapse will also fall
through to the `user_may_invite` callback before approving the invite.
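
For example, a module might implement the callback as below (a hedged sketch: it assumes registration goes through `register_spam_checker_callbacks` like the existing spam-checker callbacks, and the blocked server name is made up):

```python
from synapse.module_api import NOT_SPAM, ModuleApi
from synapse.module_api.errors import Codes

class BlockInvitesFromServer:
    def __init__(self, config: dict, api: ModuleApi):
        api.register_spam_checker_callbacks(
            federated_user_may_invite=self.federated_user_may_invite,
        )

    async def federated_user_may_invite(self, event):
        # The full event is available here, including any stripped state in
        # `unsigned`; reject invites whose sender is on one homeserver.
        if event.sender.endswith(":spam.example.org"):
            return Codes.FORBIDDEN
        return NOT_SPAM
```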
### `user_may_send_3pid_invite`
_First introduced in Synapse v1.45.0_


@@ -764,22 +764,23 @@ max_event_delay_duration: 24h
---
### `user_types`
Configuration settings related to the user types feature.
*(object)* Configuration settings related to the user types feature.
This setting has the following sub-options:
* `default_user_type`: The default user type to use for registering new users when no value has been specified.
Defaults to none.
* `extra_user_types`: Array of additional user types to allow. These are treated as real users. Defaults to [].
* `default_user_type` (string|null): The default user type to use for registering new users when no value has been specified. Defaults to none. Defaults to `null`.
* `extra_user_types` (array): Array of additional user types to allow. These are treated as real users. Defaults to `[]`.
Example configuration:
```yaml
user_types:
default_user_type: "custom"
default_user_type: custom
extra_user_types:
- "custom"
- "custom2"
- custom
- custom2
```
---
## Homeserver blocking
Useful options for Synapse admins.
@@ -1936,6 +1937,33 @@ rc_delayed_event_mgmt:
burst_count: 20.0
```
---
### `rc_reports`
*(object)* Ratelimiting settings for reporting content.
This is a ratelimiting option that ratelimits reports made by users about content they see.
Setting this to a high value allows users to report content quickly, possibly in duplicate. This can result in higher database usage.
This setting has the following sub-options:
* `per_second` (number): Maximum number of requests a client can send per second.
* `burst_count` (number): Maximum number of requests a client can send before being throttled.
Default configuration:
```yaml
rc_reports:
per_user:
per_second: 1.0
burst_count: 5.0
```
Example configuration:
```yaml
rc_reports:
per_second: 2.0
burst_count: 20.0
```
---
### `federation_rr_transactions_per_room_per_second`
*(integer)* Sets outgoing federation transaction frequency for sending read-receipts, per-room.

poetry.lock (generated, 129 lines changed):

@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 2.1.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 2.1.3 and should not be changed by hand.
[[package]]
name = "annotated-types"
@@ -39,7 +39,7 @@ description = "The ultimate Python library in building OAuth and OpenID Connect
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"all\" or extra == \"jwt\" or extra == \"oidc\""
markers = "extra == \"oidc\" or extra == \"jwt\" or extra == \"all\""
files = [
{file = "authlib-1.5.2-py2.py3-none-any.whl", hash = "sha256:8804dd4402ac5e4a0435ac49e0b6e19e395357cfa632a3f624dcb4f6df13b4b1"},
{file = "authlib-1.5.2.tar.gz", hash = "sha256:fe85ec7e50c5f86f1e2603518bb3b4f632985eb4a355e52256530790e326c512"},
@@ -50,19 +50,18 @@ cryptography = "*"
[[package]]
name = "automat"
version = "22.10.0"
version = "25.4.16"
description = "Self-service finite-state machines for the programmer on the go."
optional = false
python-versions = "*"
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "Automat-22.10.0-py2.py3-none-any.whl", hash = "sha256:c3164f8742b9dc440f3682482d32aaff7bb53f71740dd018533f9de286b64180"},
{file = "Automat-22.10.0.tar.gz", hash = "sha256:e56beb84edad19dcc11d30e8d9b895f75deeb5ef5e96b84a467066b3b84bb04e"},
{file = "automat-25.4.16-py3-none-any.whl", hash = "sha256:04e9bce696a8d5671ee698005af6e5a9fa15354140a87f4870744604dcdd3ba1"},
{file = "automat-25.4.16.tar.gz", hash = "sha256:0017591a5477066e90d26b0e696ddc143baafd87b588cfac8100bc6be9634de0"},
]
[package.dependencies]
attrs = ">=19.2.0"
six = "*"
typing_extensions = {version = "*", markers = "python_version < \"3.10\""}
[package.extras]
visualize = ["Twisted (>=16.1.1)", "graphviz (>0.5.1)"]
@@ -451,7 +450,7 @@ description = "XML bomb protection for Python stdlib modules"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
@@ -494,7 +493,7 @@ description = "XPath 1.0/2.0/3.0/3.1 parsers and selectors for ElementTree and l
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "elementpath-4.1.5-py3-none-any.whl", hash = "sha256:2ac1a2fb31eb22bbbf817f8cf6752f844513216263f0e3892c8e79782fe4bb55"},
{file = "elementpath-4.1.5.tar.gz", hash = "sha256:c2d6dc524b29ef751ecfc416b0627668119d8812441c555d7471da41d4bacb8d"},
@@ -544,7 +543,7 @@ description = "Python wrapper for hiredis"
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"all\" or extra == \"redis\""
markers = "extra == \"redis\" or extra == \"all\""
files = [
{file = "hiredis-3.1.0-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:2892db9db21f0cf7cc298d09f85d3e1f6dc4c4c24463ab67f79bc7a006d51867"},
{file = "hiredis-3.1.0-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:93cfa6cc25ee2ceb0be81dc61eca9995160b9e16bdb7cca4a00607d57e998918"},
@@ -890,7 +889,7 @@ description = "Jaeger Python OpenTracing Tracer implementation"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "jaeger-client-4.8.0.tar.gz", hash = "sha256:3157836edab8e2c209bd2d6ae61113db36f7ee399e66b1dcbb715d87ab49bfe0"},
]
@@ -1028,7 +1027,7 @@ description = "A strictly RFC 4510 conforming LDAP V3 pure Python client library
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
files = [
{file = "ldap3-2.9.1-py2.py3-none-any.whl", hash = "sha256:5869596fc4948797020d3f03b7939da938778a0f9e2009f7a072ccf92b8e8d70"},
{file = "ldap3-2.9.1.tar.gz", hash = "sha256:f3e7fc4718e3f09dda568b57100095e0ce58633bcabbed8667ce3f8fbaa4229f"},
@@ -1044,7 +1043,7 @@ description = "Powerful and Pythonic XML processing library combining libxml2/li
optional = true
python-versions = ">=3.6"
groups = ["main"]
markers = "extra == \"all\" or extra == \"url-preview\""
markers = "extra == \"url-preview\" or extra == \"all\""
files = [
{file = "lxml-5.4.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e7bc6df34d42322c5289e37e9971d6ed114e3776b45fa879f734bded9d1fea9c"},
{file = "lxml-5.4.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6854f8bd8a1536f8a1d9a3655e6354faa6406621cf857dc27b681b69860645c7"},
@@ -1324,7 +1323,7 @@ description = "An LDAP3 auth provider for Synapse"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
files = [
{file = "matrix-synapse-ldap3-0.3.0.tar.gz", hash = "sha256:8bb6517173164d4b9cc44f49de411d8cebdb2e705d5dd1ea1f38733c4a009e1d"},
{file = "matrix_synapse_ldap3-0.3.0-py3-none-any.whl", hash = "sha256:8b4d701f8702551e98cc1d8c20dbed532de5613584c08d0df22de376ba99159d"},
@@ -1545,7 +1544,7 @@ description = "OpenTracing API for Python. See documentation at http://opentraci
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "opentracing-2.4.0.tar.gz", hash = "sha256:a173117e6ef580d55874734d1fa7ecb6f3655160b8b8974a2a1e98e5ec9c840d"},
]
@@ -1714,7 +1713,7 @@ description = "psycopg2 - Python-PostgreSQL Database Adapter"
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"all\" or extra == \"postgres\""
markers = "extra == \"postgres\" or extra == \"all\""
files = [
{file = "psycopg2-2.9.10-cp310-cp310-win32.whl", hash = "sha256:5df2b672140f95adb453af93a7d669d7a7bf0a56bcd26f1502329166f4a61716"},
{file = "psycopg2-2.9.10-cp310-cp310-win_amd64.whl", hash = "sha256:c6f7b8561225f9e711a9c47087388a97fdc948211c10a4bccbf0ba68ab7b3b5a"},
@@ -1722,7 +1721,6 @@ files = [
{file = "psycopg2-2.9.10-cp311-cp311-win_amd64.whl", hash = "sha256:0435034157049f6846e95103bd8f5a668788dd913a7c30162ca9503fdf542cb4"},
{file = "psycopg2-2.9.10-cp312-cp312-win32.whl", hash = "sha256:65a63d7ab0e067e2cdb3cf266de39663203d38d6a8ed97f5ca0cb315c73fe067"},
{file = "psycopg2-2.9.10-cp312-cp312-win_amd64.whl", hash = "sha256:4a579d6243da40a7b3182e0430493dbd55950c493d8c68f4eec0b302f6bbf20e"},
{file = "psycopg2-2.9.10-cp313-cp313-win_amd64.whl", hash = "sha256:91fd603a2155da8d0cfcdbf8ab24a2d54bca72795b90d2a3ed2b6da8d979dee2"},
{file = "psycopg2-2.9.10-cp39-cp39-win32.whl", hash = "sha256:9d5b3b94b79a844a986d029eee38998232451119ad653aea42bb9220a8c5066b"},
{file = "psycopg2-2.9.10-cp39-cp39-win_amd64.whl", hash = "sha256:88138c8dedcbfa96408023ea2b0c369eda40fe5d75002c0964c78f46f11fa442"},
{file = "psycopg2-2.9.10.tar.gz", hash = "sha256:12ec0b40b0273f95296233e8750441339298e6a572f7039da5b260e3c8b60e11"},
@@ -1735,7 +1733,7 @@ description = ".. image:: https://travis-ci.org/chtd/psycopg2cffi.svg?branch=mas
optional = true
python-versions = "*"
groups = ["main"]
markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
files = [
{file = "psycopg2cffi-2.9.0.tar.gz", hash = "sha256:7e272edcd837de3a1d12b62185eb85c45a19feda9e62fa1b120c54f9e8d35c52"},
]
@@ -1751,7 +1749,7 @@ description = "A Simple library to enable psycopg2 compatability"
optional = true
python-versions = "*"
groups = ["main"]
markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
files = [
{file = "psycopg2cffi-compat-1.1.tar.gz", hash = "sha256:d25e921748475522b33d13420aad5c2831c743227dc1f1f2585e0fdb5c914e05"},
]
@@ -1773,18 +1771,18 @@ files = [
[[package]]
name = "pyasn1-modules"
version = "0.4.1"
version = "0.4.2"
description = "A collection of ASN.1-based protocols modules"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pyasn1_modules-0.4.1-py3-none-any.whl", hash = "sha256:49bfa96b45a292b711e986f222502c1c9a5e1f4e568fc30e2574a6c7d07838fd"},
{file = "pyasn1_modules-0.4.1.tar.gz", hash = "sha256:c28e2dbf9c06ad61c71a075c7e0f9fd0f1b0bb2d2ad4377f240d33ac2ab60a7c"},
{file = "pyasn1_modules-0.4.2-py3-none-any.whl", hash = "sha256:29253a9207ce32b64c3ac6600edc75368f98473906e8fd1043bd6b5b1de2c14a"},
{file = "pyasn1_modules-0.4.2.tar.gz", hash = "sha256:677091de870a80aae844b1ca6134f54652fa2c8c5a52aa396440ac3106e941e6"},
]
[package.dependencies]
pyasn1 = ">=0.4.6,<0.7.0"
pyasn1 = ">=0.6.1,<0.7.0"
[[package]]
name = "pycparser"
@@ -1974,7 +1972,7 @@ description = "Python extension wrapping the ICU C++ API"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"user-search\""
markers = "extra == \"user-search\" or extra == \"all\""
files = [
{file = "PyICU-2.14.tar.gz", hash = "sha256:acc7eb92bd5c554ed577249c6978450a4feda0aa6f01470152b3a7b382a02132"},
]
@@ -2023,7 +2021,7 @@ description = "A development tool to measure, monitor and analyze the memory beh
optional = true
python-versions = ">=3.6"
groups = ["main"]
markers = "extra == \"all\" or extra == \"cache-memory\""
markers = "extra == \"cache-memory\" or extra == \"all\""
files = [
{file = "Pympler-1.0.1-py3-none-any.whl", hash = "sha256:d260dda9ae781e1eab6ea15bacb84015849833ba5555f141d2d9b7b7473b307d"},
{file = "Pympler-1.0.1.tar.gz", hash = "sha256:993f1a3599ca3f4fcd7160c7545ad06310c9e12f70174ae7ae8d4e25f6c5d3fa"},
@@ -2083,7 +2081,7 @@ description = "Python implementation of SAML Version 2 Standard"
optional = true
python-versions = ">=3.9,<4.0"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "pysaml2-7.5.0-py3-none-any.whl", hash = "sha256:bc6627cc344476a83c757f440a73fda1369f13b6fda1b4e16bca63ffbabb5318"},
{file = "pysaml2-7.5.0.tar.gz", hash = "sha256:f36871d4e5ee857c6b85532e942550d2cf90ea4ee943d75eb681044bbc4f54f7"},
@@ -2108,7 +2106,7 @@ description = "Extensions to the standard Python datetime module"
optional = true
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
@@ -2136,7 +2134,7 @@ description = "World timezone definitions, modern and historical"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "pytz-2022.7.1-py2.py3-none-any.whl", hash = "sha256:78f4f37d8198e0627c5f1143240bb0206b8691d8d7ac6d78fee88b78733f8c4a"},
{file = "pytz-2022.7.1.tar.gz", hash = "sha256:01a0681c4b9684a28304615eba55d1ab31ae00bf68ec157ec3708a8182dbbcd0"},
@@ -2256,19 +2254,19 @@ rpds-py = ">=0.7.0"
[[package]]
name = "requests"
version = "2.32.2"
version = "2.32.4"
description = "Python HTTP for Humans."
optional = false
python-versions = ">=3.8"
groups = ["main", "dev"]
files = [
{file = "requests-2.32.2-py3-none-any.whl", hash = "sha256:fc06670dd0ed212426dfeb94fc1b983d917c4f9847c863f313c9dfaaffb7c23c"},
{file = "requests-2.32.2.tar.gz", hash = "sha256:dd951ff5ecf3e3b3aa26b40703ba77495dab41da839ae72ef3c8e5d8e2433289"},
{file = "requests-2.32.4-py3-none-any.whl", hash = "sha256:27babd3cda2a6d50b30443204ee89830707d396671944c998b5975b031ac2b2c"},
{file = "requests-2.32.4.tar.gz", hash = "sha256:27d0316682c8a29834d3264820024b62a36942083d52caf2f14c0591336d3422"},
]
[package.dependencies]
certifi = ">=2017.4.17"
charset-normalizer = ">=2,<4"
charset_normalizer = ">=2,<4"
idna = ">=2.5,<4"
urllib3 = ">=1.21.1,<3"
@@ -2500,7 +2498,7 @@ description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = ">=3.6"
groups = ["main"]
markers = "extra == \"all\" or extra == \"sentry\""
markers = "extra == \"sentry\" or extra == \"all\""
files = [
{file = "sentry_sdk-2.22.0-py2.py3-none-any.whl", hash = "sha256:3d791d631a6c97aad4da7074081a57073126c69487560c6f8bffcf586461de66"},
{file = "sentry_sdk-2.22.0.tar.gz", hash = "sha256:b4bf43bb38f547c84b2eadcefbe389b36ef75f3f38253d7a74d6b928c07ae944"},
@@ -2688,7 +2686,7 @@ description = "Tornado IOLoop Backed Concurrent Futures"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "threadloop-1.0.2-py2-none-any.whl", hash = "sha256:5c90dbefab6ffbdba26afb4829d2a9df8275d13ac7dc58dccb0e279992679599"},
{file = "threadloop-1.0.2.tar.gz", hash = "sha256:8b180aac31013de13c2ad5c834819771992d350267bddb854613ae77ef571944"},
@@ -2704,7 +2702,7 @@ description = "Python bindings for the Apache Thrift RPC system"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "thrift-0.16.0.tar.gz", hash = "sha256:2b5b6488fcded21f9d312aa23c9ff6a0195d0f6ae26ddbd5ad9e3e25dfc14408"},
]
@@ -2766,7 +2764,7 @@ description = "Tornado is a Python web framework and asynchronous networking lib
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "tornado-6.5-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:f81067dad2e4443b015368b24e802d0083fecada4f0a4572fdb72fc06e54a9a6"},
{file = "tornado-6.5-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:9ac1cbe1db860b3cbb251e795c701c41d343f06a96049d6274e7c77559117e41"},
@@ -2857,19 +2855,19 @@ keyring = ["keyring (>=15.1)"]
[[package]]
name = "twisted"
version = "24.7.0"
version = "25.5.0"
description = "An asynchronous networking framework written in Python"
optional = false
python-versions = ">=3.8.0"
groups = ["main"]
files = [
{file = "twisted-24.7.0-py3-none-any.whl", hash = "sha256:734832ef98108136e222b5230075b1079dad8a3fc5637319615619a7725b0c81"},
{file = "twisted-24.7.0.tar.gz", hash = "sha256:5a60147f044187a127ec7da96d170d49bcce50c6fd36f594e60f4587eff4d394"},
{file = "twisted-25.5.0-py3-none-any.whl", hash = "sha256:8559f654d01a54a8c3efe66d533d43f383531ebf8d81d9f9ab4769d91ca15df7"},
{file = "twisted-25.5.0.tar.gz", hash = "sha256:1deb272358cb6be1e3e8fc6f9c8b36f78eb0fa7c2233d2dbe11ec6fee04ea316"},
]
[package.dependencies]
attrs = ">=21.3.0"
automat = ">=0.8.0"
attrs = ">=22.2.0"
automat = ">=24.8.0"
constantly = ">=15.1"
hyperlink = ">=17.1.1"
idna = {version = ">=2.4", optional = true, markers = "extra == \"tls\""}
@@ -2880,19 +2878,20 @@ typing-extensions = ">=4.2.0"
zope-interface = ">=5"
[package.extras]
all-non-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "h2 (>=3.0,<5.0)", "hypothesis (>=6.56)", "hypothesis (>=6.56)", "idna (>=2.4)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "priority (>=1.1.0,<2.0)", "pyhamcrest (>=2)", "pyhamcrest (>=2)", "pyopenssl (>=21.0.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pyserial (>=3.0)", "pywin32 (!=226) ; platform_system == \"Windows\"", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "service-identity (>=18.1.0)"]
all-non-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.2,<5.0)", "h2 (>=3.2,<5.0)", "httpx[http2] (>=0.27)", "httpx[http2] (>=0.27)", "hypothesis (>=6.56)", "hypothesis (>=6.56)", "idna (>=2.4)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "priority (>=1.1.0,<2.0)", "pyhamcrest (>=2)", "pyhamcrest (>=2)", "pyopenssl (>=21.0.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pyserial (>=3.0)", "pywin32 (!=226) ; platform_system == \"Windows\"", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "service-identity (>=18.1.0)", "wsproto", "wsproto"]
conch = ["appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)"]
dev = ["coverage (>=7.5,<8.0)", "cython-test-exception-raiser (>=1.0.2,<2)", "hypothesis (>=6.56)", "pydoctor (>=23.9.0,<23.10.0)", "pyflakes (>=2.2,<3.0)", "pyhamcrest (>=2)", "python-subunit (>=1.4,<2.0)", "sphinx (>=6,<7)", "sphinx-rtd-theme (>=1.3,<2.0)", "towncrier (>=23.6,<24.0)", "twistedchecker (>=0.7,<1.0)"]
dev-release = ["pydoctor (>=23.9.0,<23.10.0)", "pydoctor (>=23.9.0,<23.10.0)", "sphinx (>=6,<7)", "sphinx (>=6,<7)", "sphinx-rtd-theme (>=1.3,<2.0)", "sphinx-rtd-theme (>=1.3,<2.0)", "towncrier (>=23.6,<24.0)", "towncrier (>=23.6,<24.0)"]
gtk-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "h2 (>=3.0,<5.0)", "hypothesis (>=6.56)", "hypothesis (>=6.56)", "idna (>=2.4)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "priority (>=1.1.0,<2.0)", "pygobject", "pygobject", "pyhamcrest (>=2)", "pyhamcrest (>=2)", "pyopenssl (>=21.0.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pyserial (>=3.0)", "pywin32 (!=226) ; platform_system == \"Windows\"", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "service-identity (>=18.1.0)"]
http2 = ["h2 (>=3.0,<5.0)", "priority (>=1.1.0,<2.0)"]
macos-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "h2 (>=3.0,<5.0)", "hypothesis (>=6.56)", "hypothesis (>=6.56)", "idna (>=2.4)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "priority (>=1.1.0,<2.0)", "pyhamcrest (>=2)", "pyhamcrest (>=2)", "pyobjc-core", "pyobjc-core", "pyobjc-framework-cfnetwork", "pyobjc-framework-cfnetwork", "pyobjc-framework-cocoa", "pyobjc-framework-cocoa", "pyopenssl (>=21.0.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pyserial (>=3.0)", "pywin32 (!=226) ; platform_system == \"Windows\"", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "service-identity (>=18.1.0)"]
mypy = ["appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "coverage (>=7.5,<8.0)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "hypothesis (>=6.56)", "idna (>=2.4)", "mypy (>=1.8,<2.0)", "mypy-zope (>=1.0.3,<1.1.0)", "priority (>=1.1.0,<2.0)", "pydoctor (>=23.9.0,<23.10.0)", "pyflakes (>=2.2,<3.0)", "pyhamcrest (>=2)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "python-subunit (>=1.4,<2.0)", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "sphinx (>=6,<7)", "sphinx-rtd-theme (>=1.3,<2.0)", "towncrier (>=23.6,<24.0)", "twistedchecker (>=0.7,<1.0)", "types-pyopenssl", "types-setuptools"]
osx-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "h2 (>=3.0,<5.0)", "hypothesis (>=6.56)", "hypothesis (>=6.56)", "idna (>=2.4)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "priority (>=1.1.0,<2.0)", "pyhamcrest (>=2)", "pyhamcrest (>=2)", "pyobjc-core", "pyobjc-core", "pyobjc-framework-cfnetwork", "pyobjc-framework-cfnetwork", "pyobjc-framework-cocoa", "pyobjc-framework-cocoa", "pyopenssl (>=21.0.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pyserial (>=3.0)", "pywin32 (!=226) ; platform_system == \"Windows\"", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "service-identity (>=18.1.0)"]
dev = ["coverage (>=7.5,<8.0)", "cython-test-exception-raiser (>=1.0.2,<2)", "httpx[http2] (>=0.27)", "hypothesis (>=6.56)", "pydoctor (>=24.11.1,<24.12.0)", "pyflakes (>=2.2,<3.0)", "pyhamcrest (>=2)", "python-subunit (>=1.4,<2.0)", "sphinx (>=6,<7)", "sphinx-rtd-theme (>=1.3,<2.0)", "towncrier (>=23.6,<24.0)", "twistedchecker (>=0.7,<1.0)"]
dev-release = ["pydoctor (>=24.11.1,<24.12.0)", "pydoctor (>=24.11.1,<24.12.0)", "sphinx (>=6,<7)", "sphinx (>=6,<7)", "sphinx-rtd-theme (>=1.3,<2.0)", "sphinx-rtd-theme (>=1.3,<2.0)", "towncrier (>=23.6,<24.0)", "towncrier (>=23.6,<24.0)"]
gtk-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.2,<5.0)", "h2 (>=3.2,<5.0)", "httpx[http2] (>=0.27)", "httpx[http2] (>=0.27)", "hypothesis (>=6.56)", "hypothesis (>=6.56)", "idna (>=2.4)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "priority (>=1.1.0,<2.0)", "pygobject", "pygobject", "pyhamcrest (>=2)", "pyhamcrest (>=2)", "pyopenssl (>=21.0.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pyserial (>=3.0)", "pywin32 (!=226) ; platform_system == \"Windows\"", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "service-identity (>=18.1.0)", "wsproto", "wsproto"]
http2 = ["h2 (>=3.2,<5.0)", "priority (>=1.1.0,<2.0)"]
macos-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.2,<5.0)", "h2 (>=3.2,<5.0)", "httpx[http2] (>=0.27)", "httpx[http2] (>=0.27)", "hypothesis (>=6.56)", "hypothesis (>=6.56)", "idna (>=2.4)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "priority (>=1.1.0,<2.0)", "pyhamcrest (>=2)", "pyhamcrest (>=2)", "pyobjc-core (<11) ; python_version < \"3.9\"", "pyobjc-core (<11) ; python_version < \"3.9\"", "pyobjc-core ; python_version >= \"3.9\"", "pyobjc-core ; python_version >= \"3.9\"", "pyobjc-framework-cfnetwork (<11) ; python_version < \"3.9\"", "pyobjc-framework-cfnetwork (<11) ; python_version < \"3.9\"", "pyobjc-framework-cfnetwork ; python_version >= \"3.9\"", "pyobjc-framework-cfnetwork ; python_version >= \"3.9\"", "pyobjc-framework-cocoa (<11) ; python_version < \"3.9\"", "pyobjc-framework-cocoa (<11) ; python_version < \"3.9\"", "pyobjc-framework-cocoa ; python_version >= \"3.9\"", "pyobjc-framework-cocoa ; python_version >= \"3.9\"", "pyopenssl (>=21.0.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pyserial (>=3.0)", "pywin32 (!=226) ; platform_system == \"Windows\"", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "service-identity (>=18.1.0)", "wsproto", "wsproto"]
mypy = ["appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "coverage (>=7.5,<8.0)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.2,<5.0)", "httpx[http2] (>=0.27)", "hypothesis (>=6.56)", "idna (>=2.4)", "mypy (==1.10.1)", "mypy-zope (==1.0.6)", "priority (>=1.1.0,<2.0)", "pydoctor (>=24.11.1,<24.12.0)", "pyflakes (>=2.2,<3.0)", "pyhamcrest (>=2)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "python-subunit (>=1.4,<2.0)", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "sphinx (>=6,<7)", "sphinx-rtd-theme (>=1.3,<2.0)", "towncrier (>=23.6,<24.0)", "twistedchecker (>=0.7,<1.0)", "types-pyopenssl", "types-setuptools", "wsproto"]
osx-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.2,<5.0)", "h2 (>=3.2,<5.0)", "httpx[http2] (>=0.27)", "httpx[http2] (>=0.27)", "hypothesis (>=6.56)", "hypothesis (>=6.56)", "idna (>=2.4)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "priority (>=1.1.0,<2.0)", "pyhamcrest (>=2)", "pyhamcrest (>=2)", "pyobjc-core (<11) ; python_version < \"3.9\"", "pyobjc-core (<11) ; python_version < \"3.9\"", "pyobjc-core ; python_version >= \"3.9\"", "pyobjc-core ; python_version >= \"3.9\"", "pyobjc-framework-cfnetwork (<11) ; python_version < \"3.9\"", "pyobjc-framework-cfnetwork (<11) ; python_version < \"3.9\"", "pyobjc-framework-cfnetwork ; python_version >= \"3.9\"", "pyobjc-framework-cfnetwork ; python_version >= \"3.9\"", "pyobjc-framework-cocoa (<11) ; python_version < \"3.9\"", "pyobjc-framework-cocoa (<11) ; python_version < \"3.9\"", "pyobjc-framework-cocoa ; python_version >= \"3.9\"", "pyobjc-framework-cocoa ; python_version >= \"3.9\"", "pyopenssl (>=21.0.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pyserial (>=3.0)", "pywin32 (!=226) ; platform_system == \"Windows\"", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "service-identity (>=18.1.0)", "wsproto", "wsproto"]
serial = ["pyserial (>=3.0)", "pywin32 (!=226) ; platform_system == \"Windows\""]
test = ["cython-test-exception-raiser (>=1.0.2,<2)", "hypothesis (>=6.56)", "pyhamcrest (>=2)"]
test = ["cython-test-exception-raiser (>=1.0.2,<2)", "httpx[http2] (>=0.27)", "hypothesis (>=6.56)", "pyhamcrest (>=2)"]
tls = ["idna (>=2.4)", "pyopenssl (>=21.0.0)", "service-identity (>=18.1.0)"]
windows-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "h2 (>=3.0,<5.0)", "hypothesis (>=6.56)", "hypothesis (>=6.56)", "idna (>=2.4)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "priority (>=1.1.0,<2.0)", "pyhamcrest (>=2)", "pyhamcrest (>=2)", "pyopenssl (>=21.0.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "pywin32 (!=226)", "pywin32 (!=226) ; platform_system == \"Windows\"", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "service-identity (>=18.1.0)", "twisted-iocpsupport (>=1.0.2)", "twisted-iocpsupport (>=1.0.2)"]
websocket = ["wsproto"]
windows-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)", "bcrypt (>=3.1.3)", "cryptography (>=3.3)", "cryptography (>=3.3)", "cython-test-exception-raiser (>=1.0.2,<2)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.2,<5.0)", "h2 (>=3.2,<5.0)", "httpx[http2] (>=0.27)", "httpx[http2] (>=0.27)", "hypothesis (>=6.56)", "hypothesis (>=6.56)", "idna (>=2.4)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "priority (>=1.1.0,<2.0)", "pyhamcrest (>=2)", "pyhamcrest (>=2)", "pyopenssl (>=21.0.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "pywin32 (!=226)", "pywin32 (!=226) ; platform_system == \"Windows\"", "pywin32 (!=226) ; platform_system == \"Windows\"", "service-identity (>=18.1.0)", "service-identity (>=18.1.0)", "twisted-iocpsupport (>=1.0.2)", "twisted-iocpsupport (>=1.0.2)", "wsproto", "wsproto"]
[[package]]
name = "txredisapi"
@ -2901,7 +2900,7 @@ description = "non-blocking redis client for python"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"redis\""
markers = "extra == \"redis\" or extra == \"all\""
files = [
{file = "txredisapi-1.4.11-py3-none-any.whl", hash = "sha256:ac64d7a9342b58edca13ef267d4fa7637c1aa63f8595e066801c1e8b56b22d0b"},
{file = "txredisapi-1.4.11.tar.gz", hash = "sha256:3eb1af99aefdefb59eb877b1dd08861efad60915e30ad5bf3d5bf6c5cedcdbc6"},
@ -2994,14 +2993,14 @@ files = [
[[package]]
name = "types-opentracing"
version = "2.4.10.6"
version = "2.4.10.20250622"
description = "Typing stubs for opentracing"
optional = false
python-versions = "*"
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types-opentracing-2.4.10.6.tar.gz", hash = "sha256:87a1bdfce9de5e555e30497663583b9b9c3bb494d029ef9806aa1f137c19e744"},
{file = "types_opentracing-2.4.10.6-py3-none-any.whl", hash = "sha256:25914c834db033a4a38fc322df0b5e5e14503b0ac97f78304ae180d721555e97"},
{file = "types_opentracing-2.4.10.20250622-py3-none-any.whl", hash = "sha256:26bc21f9e385d54898b47e9bd1fa13f200c2dada50394f6eafd063ed53813062"},
{file = "types_opentracing-2.4.10.20250622.tar.gz", hash = "sha256:00db48b7f57136c45ac3250218bd0f18b9792566dfcbd5ad1de9f7e180347e74"},
]
[[package]]
@ -3058,14 +3057,14 @@ files = [
[[package]]
name = "types-requests"
version = "2.32.0.20250328"
version = "2.32.4.20250611"
description = "Typing stubs for requests"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_requests-2.32.0.20250328-py3-none-any.whl", hash = "sha256:72ff80f84b15eb3aa7a8e2625fffb6a93f2ad5a0c20215fc1dcfa61117bcb2a2"},
{file = "types_requests-2.32.0.20250328.tar.gz", hash = "sha256:c9e67228ea103bd811c96984fac36ed2ae8da87a36a633964a21f199d60baf32"},
{file = "types_requests-2.32.4.20250611-py3-none-any.whl", hash = "sha256:ad2fe5d3b0cb3c2c902c8815a70e7fb2302c4b8c1f77bdcd738192cdb3878072"},
{file = "types_requests-2.32.4.20250611.tar.gz", hash = "sha256:741c8777ed6425830bf51e54d6abe245f79b4dcb9019f1622b773463946bf826"},
]
[package.dependencies]
@ -3124,14 +3123,14 @@ files = [
[[package]]
name = "urllib3"
version = "2.2.2"
version = "2.5.0"
description = "HTTP library with thread-safe connection pooling, file post, and more."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "urllib3-2.2.2-py3-none-any.whl", hash = "sha256:a448b2f64d686155468037e1ace9f2d2199776e17f0a46610480d311f73e3472"},
{file = "urllib3-2.2.2.tar.gz", hash = "sha256:dd505485549a7a552833da5e6063639d0d177c04f23bc3864e41e5dc5f612168"},
{file = "urllib3-2.5.0-py3-none-any.whl", hash = "sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc"},
{file = "urllib3-2.5.0.tar.gz", hash = "sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760"},
]
[package.extras]
@ -3244,7 +3243,7 @@ description = "An XML Schema validator and decoder"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "xmlschema-2.4.0-py3-none-any.whl", hash = "sha256:dc87be0caaa61f42649899189aab2fd8e0d567f2cf548433ba7b79278d231a4a"},
{file = "xmlschema-2.4.0.tar.gz", hash = "sha256:d74cd0c10866ac609e1ef94a5a69b018ad16e39077bc6393408b40c6babee793"},


@ -74,6 +74,10 @@ select = [
"PIE",
# flake8-executable
"EXE",
# flake8-logging
"LOG",
# flake8-logging-format
"G",
]
[tool.ruff.lint.isort]
@ -97,7 +101,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.132.0rc1"
version = "1.133.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"


@ -7,7 +7,7 @@ name = "synapse"
version = "0.1.0"
edition = "2021"
rust-version = "1.66.0"
rust-version = "1.81.0"
[lib]
name = "synapse"
@ -30,19 +30,27 @@ http = "1.1.0"
lazy_static = "1.4.0"
log = "0.4.17"
mime = "0.3.17"
pyo3 = { version = "0.24.2", features = [
pyo3 = { version = "0.25.1", features = [
"macros",
"anyhow",
"abi3",
"abi3-py39",
] }
pyo3-log = "0.12.0"
pythonize = "0.24.0"
pyo3-log = "0.12.4"
pythonize = "0.25.0"
regex = "1.6.0"
sha2 = "0.10.8"
serde = { version = "1.0.144", features = ["derive"] }
serde_json = "1.0.85"
ulid = "1.1.2"
reqwest = { version = "0.12.15", default-features = false, features = [
"http2",
"stream",
"rustls-tls-native-roots",
] }
http-body-util = "0.1.3"
futures = "0.3.31"
tokio = { version = "1.44.2", features = ["rt", "rt-multi-thread"] }
[features]
extension-module = ["pyo3/extension-module"]


@ -58,3 +58,15 @@ impl NotFoundError {
NotFoundError::new_err(())
}
}
import_exception!(synapse.api.errors, HttpResponseException);
impl HttpResponseException {
pub fn new(status: StatusCode, bytes: Vec<u8>) -> pyo3::PyErr {
HttpResponseException::new_err((
status.as_u16(),
status.canonical_reason().unwrap_or_default(),
bytes,
))
}
}

rust/src/http_client.rs (new file, 218 lines)

@ -0,0 +1,218 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2025 New Vector, Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
use std::{collections::HashMap, future::Future, panic::AssertUnwindSafe, sync::LazyLock};
use anyhow::Context;
use futures::{FutureExt, TryStreamExt};
use pyo3::{exceptions::PyException, prelude::*, types::PyString};
use reqwest::RequestBuilder;
use tokio::runtime::Runtime;
use crate::errors::HttpResponseException;
/// The tokio runtime that we're using to run async Rust libs.
static RUNTIME: LazyLock<Runtime> = LazyLock::new(|| {
tokio::runtime::Builder::new_multi_thread()
.worker_threads(4)
.enable_all()
.build()
.unwrap()
});
/// A reference to the `Deferred` python class.
static DEFERRED_CLASS: LazyLock<PyObject> = LazyLock::new(|| {
Python::with_gil(|py| {
py.import("twisted.internet.defer")
.expect("module 'twisted.internet.defer' should be importable")
.getattr("Deferred")
.expect("module 'twisted.internet.defer' should have a 'Deferred' class")
.unbind()
})
});
/// A reference to the twisted `reactor`.
static TWISTED_REACTOR: LazyLock<Py<PyModule>> = LazyLock::new(|| {
Python::with_gil(|py| {
py.import("twisted.internet.reactor")
.expect("module 'twisted.internet.reactor' should be importable")
.unbind()
})
});
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
let child_module: Bound<'_, PyModule> = PyModule::new(py, "http_client")?;
child_module.add_class::<HttpClient>()?;
// Make sure we fail early if we can't build the lazy statics.
LazyLock::force(&RUNTIME);
LazyLock::force(&DEFERRED_CLASS);
m.add_submodule(&child_module)?;
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import http_client` work.
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.http_client", child_module)?;
Ok(())
}
#[pyclass]
#[derive(Clone)]
struct HttpClient {
client: reqwest::Client,
}
#[pymethods]
impl HttpClient {
#[new]
pub fn py_new(user_agent: &str) -> PyResult<HttpClient> {
// The twisted reactor can only be imported after Synapse has been
// imported, to allow Synapse to change the twisted reactor. If we try
// and import the reactor too early twisted installs a default reactor,
// which can't be replaced.
LazyLock::force(&TWISTED_REACTOR);
Ok(HttpClient {
client: reqwest::Client::builder()
.user_agent(user_agent)
.build()
.context("building reqwest client")?,
})
}
pub fn get<'a>(
&self,
py: Python<'a>,
url: String,
response_limit: usize,
) -> PyResult<Bound<'a, PyAny>> {
self.send_request(py, self.client.get(url), response_limit)
}
pub fn post<'a>(
&self,
py: Python<'a>,
url: String,
response_limit: usize,
headers: HashMap<String, String>,
request_body: String,
) -> PyResult<Bound<'a, PyAny>> {
let mut builder = self.client.post(url);
for (name, value) in headers {
builder = builder.header(name, value);
}
builder = builder.body(request_body);
self.send_request(py, builder, response_limit)
}
}
impl HttpClient {
fn send_request<'a>(
&self,
py: Python<'a>,
builder: RequestBuilder,
response_limit: usize,
) -> PyResult<Bound<'a, PyAny>> {
create_deferred(py, async move {
let response = builder.send().await.context("sending request")?;
let status = response.status();
let mut stream = response.bytes_stream();
let mut buffer = Vec::new();
while let Some(chunk) = stream.try_next().await.context("reading body")? {
if buffer.len() + chunk.len() > response_limit {
Err(anyhow::anyhow!("Response size too large"))?;
}
buffer.extend_from_slice(&chunk);
}
if !status.is_success() {
return Err(HttpResponseException::new(status, buffer));
}
let r = Python::with_gil(|py| buffer.into_pyobject(py).map(|o| o.unbind()))?;
Ok(r)
})
}
}
/// Creates a twisted deferred from the given future, spawning the task on the
/// tokio runtime.
///
/// Does not handle deferred cancellation or contextvars.
fn create_deferred<F, O>(py: Python, fut: F) -> PyResult<Bound<'_, PyAny>>
where
F: Future<Output = PyResult<O>> + Send + 'static,
for<'a> O: IntoPyObject<'a>,
{
let deferred = DEFERRED_CLASS.bind(py).call0()?;
let deferred_callback = deferred.getattr("callback")?.unbind();
let deferred_errback = deferred.getattr("errback")?.unbind();
RUNTIME.spawn(async move {
// TODO: Is it safe to assert unwind safety here? I think so, as we
// don't use anything that could be tainted by the panic afterwards.
// Note that `.spawn(..)` asserts unwind safety on the future too.
let res = AssertUnwindSafe(fut).catch_unwind().await;
Python::with_gil(move |py| {
// Flatten the panic into standard python error
let res = match res {
Ok(r) => r,
Err(panic_err) => {
let panic_message = get_panic_message(&panic_err);
Err(PyException::new_err(
PyString::new(py, panic_message).unbind(),
))
}
};
// Send the result to the deferred, via `.callback(..)` or `.errback(..)`
match res {
Ok(obj) => {
TWISTED_REACTOR
.call_method(py, "callFromThread", (deferred_callback, obj), None)
.expect("callFromThread should not fail"); // There's nothing we can really do with errors here
}
Err(err) => {
TWISTED_REACTOR
.call_method(py, "callFromThread", (deferred_errback, err), None)
.expect("callFromThread should not fail"); // There's nothing we can really do with errors here
}
}
});
});
Ok(deferred)
}
/// Try and get the panic message out of the panic
fn get_panic_message<'a>(panic_err: &'a (dyn std::any::Any + Send + 'static)) -> &'a str {
// Apparently this is how you extract the panic message from a panic
if let Some(str_slice) = panic_err.downcast_ref::<&str>() {
str_slice
} else if let Some(string) = panic_err.downcast_ref::<String>() {
string
} else {
"unknown error"
}
}
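The module above is exposed to Python as `synapse.synapse_rust.http_client.HttpClient`: each call returns a Twisted `Deferred` that resolves to the raw body bytes, or errbacks with `HttpResponseException` (wired up in `errors.rs` above) on a non-2xx status. A minimal usage sketch, mirroring the MSC3861 call site further down; the logcontext handling is needed because the deferred is fired from the tokio thread via `callFromThread`:

```python
from typing import Dict

from synapse.logging.context import PreserveLoggingContext
from synapse.synapse_rust.http_client import HttpClient


async def post_form(uri: str, headers: Dict[str, str], body: str) -> bytes:
    client = HttpClient(user_agent="Synapse")
    with PreserveLoggingContext():
        # Resolves to the response body, capped here at 1 MiB; non-2xx
        # responses errback with synapse.api.errors.HttpResponseException.
        return await client.post(
            url=uri,
            response_limit=1 * 1024 * 1024,
            headers=headers,
            request_body=body,
        )
```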


@ -27,7 +27,7 @@ pub enum IdentifierError {
impl fmt::Display for IdentifierError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{:?}", self)
write!(f, "{self:?}")
}
}


@ -8,6 +8,7 @@ pub mod acl;
pub mod errors;
pub mod events;
pub mod http;
pub mod http_client;
pub mod identifier;
pub mod matrix_const;
pub mod push;
@ -50,6 +51,7 @@ fn synapse_rust(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
acl::register_module(py, m)?;
push::register_module(py, m)?;
events::register_module(py, m)?;
http_client::register_module(py, m)?;
rendezvous::register_module(py, m)?;
Ok(())


@ -1,5 +1,5 @@
$schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.132/synapse-config.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.133/synapse-config.schema.json
type: object
properties:
modules:
@ -912,6 +912,24 @@ properties:
default: null
examples:
- 24h
user_types:
type: object
description: >-
Configuration settings related to the user types feature.
properties:
default_user_type:
type: ["string", "null"]
description: "The default user type to use for registering new users when no value has been specified. Defaults to none."
default: null
extra_user_types:
type: array
description: "Array of additional user types to allow. These are treated as real users."
items:
type: string
default: []
examples:
- default_user_type: "custom"
extra_user_types: ["custom", "custom2"]
admin_contact:
type: ["string", "null"]
description: How to reach the server admin, used in `ResourceLimitError`.
@ -2167,6 +2185,23 @@ properties:
examples:
- per_second: 2.0
burst_count: 20.0
rc_reports:
$ref: "#/$defs/rc"
description: >-
Ratelimiting settings for reporting content.
This ratelimits reports made by users about content they see.
Setting this to a high value allows users to report content quickly,
possibly in duplicate; this can result in higher database usage.
default:
per_user:
per_second: 1.0
burst_count: 5.0
examples:
- per_second: 2.0
burst_count: 20.0
federation_rr_transactions_per_room_per_second:
type: integer
description: >-


@ -243,7 +243,7 @@ def do_lint() -> Set[str]:
importlib.import_module(module_info.name)
except ModelCheckerException as e:
logger.warning(
f"Bad annotation found when importing {module_info.name}"
"Bad annotation found when importing %s", module_info.name
)
failures.add(format_model_checker_exception(e))


@ -139,3 +139,6 @@ cargo-fmt
# Ensure type hints are correct.
mypy
# Generate configuration documentation from the JSON Schema
./scripts-dev/gen_config_documentation.py schema/synapse-config.schema.yaml > docs/usage/configuration/config_documentation.md


@ -37,7 +37,9 @@ from synapse.appservice import ApplicationService
from synapse.http import get_request_user_agent
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import trace
from synapse.state import CREATE_KEY, POWER_KEY
from synapse.types import Requester, create_requester
from synapse.types.state import StateFilter
from synapse.util.cancellation import cancellable
if TYPE_CHECKING:
@ -216,18 +218,20 @@ class BaseAuth:
# by checking if they would (theoretically) be able to change the
# m.room.canonical_alias events
power_level_event = (
await self._storage_controllers.state.get_current_state_event(
room_id, EventTypes.PowerLevels, ""
auth_events = await self._storage_controllers.state.get_current_state(
room_id,
StateFilter.from_types(
[
POWER_KEY,
CREATE_KEY,
]
),
)
)
auth_events = {}
if power_level_event:
auth_events[(EventTypes.PowerLevels, "")] = power_level_event
send_level = event_auth.get_send_level(
EventTypes.CanonicalAlias, "", power_level_event
EventTypes.CanonicalAlias,
"",
auth_events.get(POWER_KEY),
)
user_level = event_auth.get_user_power_level(
requester.user.to_string(), auth_events


@ -30,9 +30,6 @@ from authlib.oauth2.rfc7662 import IntrospectionToken
from authlib.oidc.discovery import OpenIDProviderMetadata, get_well_known_url
from prometheus_client import Histogram
from twisted.web.client import readBody
from twisted.web.http_headers import Headers
from synapse.api.auth.base import BaseAuth
from synapse.api.errors import (
AuthError,
@ -43,8 +40,14 @@ from synapse.api.errors import (
UnrecognizedRequestError,
)
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.opentracing import active_span, force_tracing, start_active_span
from synapse.logging.context import PreserveLoggingContext
from synapse.logging.opentracing import (
active_span,
force_tracing,
inject_request_headers,
start_active_span,
)
from synapse.synapse_rust.http_client import HttpClient
from synapse.types import Requester, UserID, create_requester
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
@ -179,6 +182,10 @@ class MSC3861DelegatedAuth(BaseAuth):
self._admin_token: Callable[[], Optional[str]] = self._config.admin_token
self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
self._rust_http_client = HttpClient(
user_agent=self._http_client.user_agent.decode("utf8")
)
# # Token Introspection Cache
# This remembers what users/devices are represented by which access tokens,
# in order to reduce overall system load:
@ -301,7 +308,6 @@ class MSC3861DelegatedAuth(BaseAuth):
introspection_endpoint = await self._introspection_endpoint()
raw_headers: Dict[str, str] = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": str(self._http_client.user_agent, "utf-8"),
"Accept": "application/json",
# Tell MAS that we support reading the device ID as an explicit
# value, not encoded in the scope. This is supported by MAS 0.15+
@ -315,38 +321,34 @@ class MSC3861DelegatedAuth(BaseAuth):
uri, raw_headers, body = self._client_auth.prepare(
method="POST", uri=introspection_endpoint, headers=raw_headers, body=body
)
headers = Headers({k: [v] for (k, v) in raw_headers.items()})
# Do the actual request
# We're not using the SimpleHttpClient util methods as we don't want to
# check the HTTP status code, and we do the body encoding ourselves.
logger.debug("Fetching token from MAS")
start_time = self._clock.time()
try:
response = await self._http_client.request(
method="POST",
uri=uri,
data=body.encode("utf-8"),
headers=headers,
with start_active_span("mas-introspect-token"):
inject_request_headers(raw_headers)
with PreserveLoggingContext():
resp_body = await self._rust_http_client.post(
url=uri,
response_limit=1 * 1024 * 1024,
headers=raw_headers,
request_body=body,
)
resp_body = await make_deferred_yieldable(readBody(response))
except HttpResponseException as e:
end_time = self._clock.time()
introspection_response_timer.labels(e.code).observe(end_time - start_time)
raise
except Exception:
end_time = self._clock.time()
introspection_response_timer.labels("ERR").observe(end_time - start_time)
raise
end_time = self._clock.time()
introspection_response_timer.labels(response.code).observe(
end_time - start_time
)
logger.debug("Fetched token from MAS")
if response.code < 200 or response.code >= 300:
raise HttpResponseException(
response.code,
response.phrase.decode("ascii", errors="replace"),
resp_body,
)
end_time = self._clock.time()
introspection_response_timer.labels(200).observe(end_time - start_time)
resp = json_decoder.decode(resp_body.decode("utf-8"))
@ -475,7 +477,7 @@ class MSC3861DelegatedAuth(BaseAuth):
# XXX: This is a temporary solution so that the admin API can be called by
# the OIDC provider. This will be removed once we have OIDC client
# credentials grant support in matrix-authentication-service.
logging.info("Admin toked used")
logger.info("Admin toked used")
# XXX: that user doesn't exist and won't be provisioned.
# This is mostly fine for admin calls, but we should also think about doing
# requesters without a user_id.


@ -527,7 +527,11 @@ class InvalidCaptchaError(SynapseError):
class LimitExceededError(SynapseError):
"""A client has sent too many requests and is being throttled."""
"""A client has sent too many requests and is being throttled.
Args:
pause: Optional time in seconds to pause before responding to the client.
"""
def __init__(
self,
@ -535,6 +539,7 @@ class LimitExceededError(SynapseError):
code: int = 429,
retry_after_ms: Optional[int] = None,
errcode: str = Codes.LIMIT_EXCEEDED,
pause: Optional[float] = None,
):
# Use HTTP header Retry-After to enable library-assisted retry handling.
headers = (
@ -545,6 +550,7 @@ class LimitExceededError(SynapseError):
super().__init__(code, "Too Many Requests", errcode, headers=headers)
self.retry_after_ms = retry_after_ms
self.limiter_name = limiter_name
self.pause = pause
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
return cs_error(self.msg, self.errcode, retry_after_ms=self.retry_after_ms)


@ -338,12 +338,10 @@ class Ratelimiter:
)
if not allowed:
if pause:
await self.clock.sleep(pause)
raise LimitExceededError(
limiter_name=self._limiter_name,
retry_after_ms=int(1000 * (time_allowed - time_now_s)),
pause=pause,
)
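With the sleep removed from `Ratelimiter.ratelimit`, honouring `pause` becomes the catcher's job. The catch site isn't part of this diff, so the following is only a hedged sketch of the intended division of labour (`check_limit` and its parameters are stand-ins, not Synapse code):

```python
from synapse.api.errors import LimitExceededError


async def check_limit(ratelimiter, clock, requester, user_id: str) -> None:
    try:
        await ratelimiter.ratelimit(requester, user_id)
    except LimitExceededError as e:
        if e.pause:
            # Slow abusive clients down by delaying the 429 response
            # itself, as Ratelimiter.ratelimit previously did inline.
            await clock.sleep(e.pause)
        raise
```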


@ -445,8 +445,8 @@ def listen_http(
# getHost() returns a UNIXAddress which contains an instance variable of 'name'
# encoded as a byte string. Decode as utf-8 so pretty.
logger.info(
"Synapse now listening on Unix Socket at: "
f"{ports[0].getHost().name.decode('utf-8')}"
"Synapse now listening on Unix Socket at: %s",
ports[0].getHost().name.decode("utf-8"),
)
return ports


@ -28,15 +28,13 @@ from prometheus_client import Gauge
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.types import JsonDict
from synapse.util.constants import ONE_HOUR_SECONDS, ONE_MINUTE_SECONDS
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger("synapse.app.homeserver")
ONE_MINUTE_SECONDS = 60
ONE_HOUR_SECONDS = 60 * ONE_MINUTE_SECONDS
MILLISECONDS_PER_SECOND = 1000
INITIAL_DELAY_BEFORE_FIRST_PHONE_HOME_SECONDS = 5 * ONE_MINUTE_SECONDS
@ -173,7 +171,7 @@ async def phone_stats_home(
stats["log_level"] = logging.getLevelName(log_level)
logger.info(
"Reporting stats to %s: %s" % (hs.config.metrics.report_stats_endpoint, stats)
"Reporting stats to %s: %s", hs.config.metrics.report_stats_endpoint, stats
)
try:
await hs.get_proxied_http_client().put_json(


@ -461,7 +461,7 @@ class _TransactionController:
recoverer = self.recoverers.get(service.id)
if not recoverer:
# No need to force a retry on a happy AS.
logger.info(f"{service.id} is not in recovery, not forcing retry")
logger.info("%s is not in recovery, not forcing retry", service.id)
return
recoverer.force_retry()


@ -561,11 +561,17 @@ class ExperimentalConfig(Config):
# MSC4076: Add `disable_badge_count` to pusher configuration
self.msc4076_enabled: bool = experimental.get("msc4076_enabled", False)
# MSC4235: Add `via` param to hierarchy endpoint
self.msc4235_enabled: bool = experimental.get("msc4235_enabled", False)
# MSC4263: Preventing MXID enumeration via key queries
self.msc4263_limit_key_queries_to_users_who_share_rooms = experimental.get(
"msc4263_limit_key_queries_to_users_who_share_rooms",
False,
)
# MSC4267: Automatically forgetting rooms on leave
self.msc4267_enabled: bool = experimental.get("msc4267_enabled", False)
# MSC4155: Invite filtering
self.msc4155_enabled: bool = experimental.get("msc4155_enabled", False)


@ -51,6 +51,8 @@ if TYPE_CHECKING:
from synapse.config.homeserver import HomeServerConfig
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
DEFAULT_LOG_CONFIG = Template(
"""\
# Log configuration for Synapse.
@ -291,7 +293,7 @@ def _load_logging_config(log_config_path: str) -> None:
log_config = yaml.safe_load(f.read())
if not log_config:
logging.warning("Loaded a blank logging config?")
logger.warning("Loaded a blank logging config?")
# If the old structured logging configuration is being used, raise an error.
if "structured" in log_config and log_config.get("structured"):
@ -312,7 +314,7 @@ def _reload_logging_config(log_config_path: Optional[str]) -> None:
return
_load_logging_config(log_config_path)
logging.info("Reloaded log config from %s due to SIGHUP", log_config_path)
logger.info("Reloaded log config from %s due to SIGHUP", log_config_path)
def setup_logging(
@ -349,17 +351,17 @@ def setup_logging(
appbase.register_sighup(_reload_logging_config, log_config_path)
# Log immediately so we can grep backwards.
logging.warning("***** STARTING SERVER *****")
logging.warning(
logger.warning("***** STARTING SERVER *****")
logger.warning(
"Server %s version %s",
sys.argv[0],
SYNAPSE_VERSION,
)
logging.warning("Copyright (c) 2023 New Vector, Inc")
logging.warning(
logger.warning("Copyright (c) 2023 New Vector, Inc")
logger.warning(
"Licensed under the AGPL 3.0 license. Website: https://github.com/element-hq/synapse"
)
logging.info("Server hostname: %s", config.server.server_name)
logging.info("Public Base URL: %s", config.server.public_baseurl)
logging.info("Instance name: %s", hs.get_instance_name())
logging.info("Twisted reactor: %s", type(reactor).__name__)
logger.info("Server hostname: %s", config.server.server_name)
logger.info("Public Base URL: %s", config.server.public_baseurl)
logger.info("Instance name: %s", hs.get_instance_name())
logger.info("Twisted reactor: %s", type(reactor).__name__)


@ -240,3 +240,9 @@ class RatelimitConfig(Config):
"rc_delayed_event_mgmt",
defaults={"per_second": 1, "burst_count": 5},
)
self.rc_reports = RatelimitSettings.parse(
config,
"rc_reports",
defaults={"per_second": 1, "burst_count": 5},
)


@ -27,7 +27,7 @@ from synapse.types import JsonDict
from ._base import Config, ConfigError
logger = logging.Logger(__name__)
logger = logging.getLogger(__name__)
class RoomDefaultEncryptionTypes:
@ -85,4 +85,4 @@ class RoomConfig(Config):
# When enabled, users will forget rooms when they leave them, either via a
# leave, kick or ban.
self.forget_on_leave = config.get("forget_rooms_on_leave", False)
self.forget_on_leave: bool = config.get("forget_rooms_on_leave", False)


@ -41,7 +41,7 @@ from synapse.util.stringutils import parse_and_validate_server_name
from ._base import Config, ConfigError
from ._util import validate_config
logger = logging.Logger(__name__)
logger = logging.getLogger(__name__)
DIRECT_TCP_ERROR = """
Using direct TCP replication for workers is no longer supported.


@ -64,6 +64,7 @@ from synapse.api.room_versions import (
RoomVersion,
RoomVersions,
)
from synapse.state import CREATE_KEY
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
from synapse.types import (
MutableStateMap,
@ -308,6 +309,13 @@ def check_state_dependent_auth_rules(
auth_dict = {(e.type, e.state_key): e for e in auth_events}
# Later code relies on there being a create event e.g _can_federate, _is_membership_change_allowed
# so produce a more intelligible error if we don't have one.
if auth_dict.get(CREATE_KEY) is None:
raise AuthError(
403, f"Event {event.event_id} is missing a create event in auth_events."
)
# additional check for m.federate
creating_domain = get_domain_from_id(event.room_id)
originating_domain = get_domain_from_id(event.sender)
@ -1010,11 +1018,16 @@ def get_user_power_level(user_id: str, auth_events: StateMap["EventBase"]) -> in
user_id: user's id to look up in power_levels
auth_events:
state in force at this point in the room (or rather, a subset of
it including at least the create event and power levels event.
it including at least the create event, and possibly a power levels event).
Returns:
the user's power level in this room.
"""
create_event = auth_events.get(CREATE_KEY)
assert create_event is not None, (
"A create event in the auth events chain is required to calculate user power level correctly,"
" but was not found. This indicates a bug"
)
power_level_event = get_power_level_event(auth_events)
if power_level_event:
level = power_level_event.content.get("users", {}).get(user_id)
@ -1028,12 +1041,6 @@ def get_user_power_level(user_id: str, auth_events: StateMap["EventBase"]) -> in
else:
# if there is no power levels event, the creator gets 100 and everyone
# else gets 0.
# some things which call this don't pass the create event: hack around
# that.
key = (EventTypes.Create, "")
create_event = auth_events.get(key)
if create_event is not None:
if create_event.room_version.implicit_room_creator:
creator = create_event.sender
else:


@ -195,15 +195,18 @@ class InviteAutoAccepter:
except SynapseError as e:
if e.code == HTTPStatus.FORBIDDEN:
logger.debug(
f"Update_room_membership was forbidden. This can sometimes be expected for remote invites. Exception: {e}"
"Update_room_membership was forbidden. This can sometimes be expected for remote invites. Exception: %s",
e,
)
else:
logger.warn(
f"Update_room_membership raised the following unexpected (SynapseError) exception: {e}"
logger.warning(
"Update_room_membership raised the following unexpected (SynapseError) exception: %s",
e,
)
except Exception as e:
logger.warn(
f"Update_room_membership raised the following unexpected exception: {e}"
logger.warning(
"Update_room_membership raised the following unexpected exception: %s",
e,
)
sleep = 2**retries


@ -1818,7 +1818,7 @@ class FederationClient(FederationBase):
)
return timestamp_to_event_response
except SynapseError as e:
logger.warn(
logger.warning(
"timestamp_to_event(room_id=%s, timestamp=%s, direction=%s): encountered error when trying to fetch from destinations: %s",
room_id,
timestamp,


@ -928,7 +928,8 @@ class FederationServer(FederationBase):
# joins) or the full state (for full joins).
# Return a 404 as we would if we weren't in the room at all.
logger.info(
f"Rejecting /send_{membership_type} to %s because it's a partial state room",
"Rejecting /send_%s to %s because it's a partial state room",
membership_type,
room_id,
)
raise SynapseError(


@ -495,7 +495,7 @@ class AdminHandler:
)
except Exception as ex:
logger.info(
f"Redaction of event {event.event_id} failed due to: {ex}"
"Redaction of event %s failed due to: %s", event.event_id, ex
)
result["failed_redactions"][event.event_id] = str(ex)
await self._task_scheduler.update_task(task.id, result=result)


@ -465,9 +465,7 @@ class ApplicationServicesHandler:
service, "read_receipt"
)
if new_token is not None and new_token.stream <= from_key:
logger.debug(
"Rejecting token lower than or equal to stored: %s" % (new_token,)
)
logger.debug("Rejecting token lower than or equal to stored: %s", new_token)
return []
from_token = MultiWriterStreamToken(stream=from_key)
@ -509,9 +507,7 @@ class ApplicationServicesHandler:
service, "presence"
)
if new_token is not None and new_token <= from_key:
logger.debug(
"Rejecting token lower than or equal to stored: %s" % (new_token,)
)
logger.debug("Rejecting token lower than or equal to stored: %s", new_token)
return []
for user in users:


@ -76,7 +76,7 @@ from synapse.storage.databases.main.registration import (
LoginTokenLookupResult,
LoginTokenReused,
)
from synapse.types import JsonDict, Requester, UserID
from synapse.types import JsonDict, Requester, StrCollection, UserID
from synapse.util import stringutils as stringutils
from synapse.util.async_helpers import delay_cancellation, maybe_awaitable
from synapse.util.msisdn import phone_number_to_msisdn
@ -1547,6 +1547,31 @@ class AuthHandler:
user_id, (token_id for _, token_id, _ in tokens_and_devices)
)
async def delete_access_tokens_for_devices(
self,
user_id: str,
device_ids: StrCollection,
) -> None:
"""Invalidate access tokens for the devices
Args:
user_id: ID of user the tokens belong to
device_ids: IDs of the devices whose associated tokens will be deleted.
"""
tokens_and_devices = await self.store.user_delete_access_tokens_for_devices(
user_id,
device_ids,
)
# see if any modules want to know about this
if self.password_auth_provider.on_logged_out_callbacks:
for token, _, device_id in tokens_and_devices:
await self.password_auth_provider.on_logged_out(
user_id=user_id, device_id=device_id, access_token=token
)
async def add_threepid(
self, user_id: str, medium: str, address: str, validated_at: int
) -> None:
@ -1895,7 +1920,7 @@ def load_single_legacy_password_auth_provider(
try:
provider = module(config=config, account_handler=api)
except Exception as e:
logger.error("Error while initializing %r: %s", module, e)
logger.exception("Error while initializing %r: %s", module, e)
raise
# All methods that the module provides should be async, but this wasn't enforced
@ -2428,7 +2453,7 @@ class PasswordAuthProvider:
except CancelledError:
raise
except Exception as e:
logger.error("Module raised an exception in is_3pid_allowed: %s", e)
logger.exception("Module raised an exception in is_3pid_allowed: %s", e)
raise SynapseError(code=500, msg="Internal Server Error")
return True


@ -96,6 +96,14 @@ class DeactivateAccountHandler:
403, "Deactivation of this user is forbidden", Codes.FORBIDDEN
)
logger.info(
"%s requested deactivation of %s erase_data=%s id_server=%s",
requester.user,
user_id,
erase_data,
id_server,
)
# FIXME: Theoretically there is a race here wherein user resets
# password using threepid.


@ -671,12 +671,12 @@ class DeviceHandler(DeviceWorkerHandler):
except_device_id: optional device id which should not be deleted
"""
device_map = await self.store.get_devices_by_user(user_id)
device_ids = list(device_map)
if except_device_id is not None:
device_ids = [d for d in device_ids if d != except_device_id]
await self.delete_devices(user_id, device_ids)
device_map.pop(except_device_id, None)
user_device_ids = device_map.keys()
await self.delete_devices(user_id, user_device_ids)
async def delete_devices(self, user_id: str, device_ids: List[str]) -> None:
async def delete_devices(self, user_id: str, device_ids: StrCollection) -> None:
"""Delete several devices
Args:
@ -695,17 +695,10 @@ class DeviceHandler(DeviceWorkerHandler):
else:
raise
# Delete data specific to each device. Not optimised as it is not
# considered as part of a critical path.
for device_id in device_ids:
await self._auth_handler.delete_access_tokens_for_user(
user_id, device_id=device_id
)
await self.store.delete_e2e_keys_by_device(
user_id=user_id, device_id=device_id
)
# Delete data specific to each device. Not optimised as it's an
# experimental MSC.
if self.hs.config.experimental.msc3890_enabled:
for device_id in device_ids:
# Remove any local notification settings for this device in accordance
# with MSC3890.
await self._account_data_handler.remove_account_data_for_user(
@ -713,6 +706,13 @@ class DeviceHandler(DeviceWorkerHandler):
f"org.matrix.msc3890.local_notification_settings.{device_id}",
)
# If we're deleting a lot of devices, a bunch of them may not have any
# to-device messages queued up. We filter those out to avoid scheduling
# unnecessary tasks.
devices_with_messages = await self.store.get_devices_with_messages(
user_id, device_ids
)
for device_id in devices_with_messages:
# Delete device messages asynchronously and in batches using the task scheduler
# We specify an upper stream id to avoid deleting undelivered messages
# if a user re-uses a device ID.
@ -726,6 +726,10 @@ class DeviceHandler(DeviceWorkerHandler):
},
)
await self._auth_handler.delete_access_tokens_for_devices(
user_id, device_ids=device_ids
)
# Pushers are deleted after `delete_access_tokens_for_user` is called so that
# modules using `on_logged_out` hook can use them if needed.
await self.hs.get_pusherpool().remove_pushers_by_devices(user_id, device_ids)
@ -819,6 +823,7 @@ class DeviceHandler(DeviceWorkerHandler):
# This should only happen if there are no updates, so we bail.
return
if logger.isEnabledFor(logging.DEBUG):
for device_id in device_ids:
logger.debug(
"Notifying about update %r/%r, ID: %r", user_id, device_id, position
@ -922,9 +927,6 @@ class DeviceHandler(DeviceWorkerHandler):
# can't call self.delete_device because that will clobber the
# access token so call the storage layer directly
await self.store.delete_devices(user_id, [old_device_id])
await self.store.delete_e2e_keys_by_device(
user_id=user_id, device_id=old_device_id
)
# tell everyone that the old device is gone and that the dehydrated
# device has a new display name
@ -946,7 +948,6 @@ class DeviceHandler(DeviceWorkerHandler):
raise errors.NotFoundError()
await self.delete_devices(user_id, [device_id])
await self.store.delete_e2e_keys_by_device(user_id=user_id, device_id=device_id)
@wrap_as_background_process("_handle_new_device_update_async")
async def _handle_new_device_update_async(self) -> None:
@ -1600,7 +1601,7 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
if prev_stream_id is not None and cached_devices == {
d["device_id"]: d for d in devices
}:
logging.info(
logger.info(
"Skipping device list resync for %s, as our cache matches already",
user_id,
)


@ -282,7 +282,7 @@ class DirectoryHandler:
except RequestSendFailed:
raise SynapseError(502, "Failed to fetch alias")
except CodeMessageException as e:
logging.warning(
logger.warning(
"Error retrieving alias %s -> %s %s", room_alias, e.code, e.msg
)
if e.code == 404:


@ -1062,8 +1062,8 @@ class FederationHandler:
if self.hs.config.server.block_non_admin_invites:
raise SynapseError(403, "This server does not accept room invites")
spam_check = await self._spam_checker_module_callbacks.user_may_invite(
event.sender, event.state_key, event.room_id
spam_check = (
await self._spam_checker_module_callbacks.federated_user_may_invite(event)
)
if spam_check != NOT_SPAM:
raise SynapseError(
@ -1095,7 +1095,9 @@ class FederationHandler:
rule = invite_config.get_invite_rule(event.sender)
if rule == InviteRule.BLOCK:
logger.info(
f"Automatically rejecting invite from {event.sender} due to the invite filtering rules of {event.state_key}"
"Automatically rejecting invite from %s due to the invite filtering rules of %s",
event.sender,
event.state_key,
)
raise SynapseError(
403,


@ -218,7 +218,7 @@ class IdentityHandler:
return data
except HttpResponseException as e:
logger.error("3PID bind failed with Matrix error: %r", e)
logger.exception("3PID bind failed with Matrix error: %r", e)
raise e.to_synapse_error()
except RequestTimedOutError:
raise SynapseError(500, "Timed out contacting identity server")
@ -323,7 +323,7 @@ class IdentityHandler:
# The remote server probably doesn't support unbinding (yet)
logger.warning("Received %d response while unbinding threepid", e.code)
else:
logger.error("Failed to unbind threepid on identity server: %s", e)
logger.exception("Failed to unbind threepid on identity server: %s", e)
raise SynapseError(500, "Failed to contact identity server")
except RequestTimedOutError:
raise SynapseError(500, "Timed out contacting identity server")


@ -460,7 +460,7 @@ class MessageHandler:
# date from the database in the same database transaction.
await self.store.expire_event(event_id)
except Exception as e:
logger.error("Could not expire event %s: %r", event_id, e)
logger.exception("Could not expire event %s: %r", event_id, e)
# Schedule the expiry of the next event to expire.
await self._schedule_next_expiry()
@ -2061,7 +2061,8 @@ class EventCreationHandler:
# dependent on _DUMMY_EVENT_ROOM_EXCLUSION_EXPIRY
logger.info(
"Failed to send dummy event into room %s. Will exclude it from "
"future attempts until cache expires" % (room_id,)
"future attempts until cache expires",
room_id,
)
now = self.clock.time_msec()
self._rooms_to_exclude_from_dummy_event_insertion[room_id] = now
@ -2120,7 +2121,9 @@ class EventCreationHandler:
except AuthError:
logger.info(
"Failed to send dummy event into room %s for user %s due to "
"lack of power. Will try another user" % (room_id, user_id)
"lack of power. Will try another user",
room_id,
user_id,
)
return False


@ -563,12 +563,13 @@ class OidcProvider:
raise ValueError("Unexpected subject")
except Exception:
logger.warning(
f"OIDC Back-Channel Logout is enabled for issuer {self.issuer!r} "
"OIDC Back-Channel Logout is enabled for issuer %r "
"but it looks like the configured `user_mapping_provider` "
"does not use the `sub` claim as subject. If it is the case, "
"and you want Synapse to ignore the `sub` claim in OIDC "
"Back-Channel Logouts, set `backchannel_logout_ignore_sub` "
"to `true` in the issuer config."
"to `true` in the issuer config.",
self.issuer,
)
@property
@ -826,10 +827,10 @@ class OidcProvider:
if response.code < 400:
logger.debug(
"Invalid response from the authorization server: "
'responded with a "{status}" '
"but body has an error field: {error!r}".format(
status=status, error=resp["error"]
)
'responded with a "%s" '
"but body has an error field: %r",
status,
resp["error"],
)
description = resp.get("error_description", error)
@ -1385,7 +1386,8 @@ class OidcProvider:
# support dynamic registration in Synapse at some point.
if not self._config.backchannel_logout_enabled:
logger.warning(
f"Received an OIDC Back-Channel Logout request from issuer {self.issuer!r} but it is disabled in config"
"Received an OIDC Back-Channel Logout request from issuer %r but it is disabled in config",
self.issuer,
)
# TODO: this responds with a 400 status code, which is what the OIDC
@ -1797,5 +1799,5 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
extras[key] = template.render(user=userinfo).strip()
except Exception as e:
# Log an error and skip this value (don't break login for this).
logger.error("Failed to render OIDC extra attribute %s: %s" % (key, e))
logger.exception("Failed to render OIDC extra attribute %s: %s", key, e)
return extras


@ -506,7 +506,7 @@ class RegistrationHandler:
ratelimit=False,
)
except Exception as e:
logger.error("Failed to join new user to %r: %r", r, e)
logger.exception("Failed to join new user to %r: %r", r, e)
async def _join_rooms(self, user_id: str) -> None:
"""
@ -596,7 +596,7 @@ class RegistrationHandler:
# moving away from bare excepts is a good thing to do.
logger.error("Failed to join new user to %r: %r", r, e)
except Exception as e:
logger.error("Failed to join new user to %r: %r", r, e, exc_info=True)
logger.exception("Failed to join new user to %r: %r", r, e)
async def _auto_join_rooms(self, user_id: str) -> None:
"""Automatically joins users to auto join rooms - creating the room in the first place


@ -0,0 +1,98 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#
import logging
from http import HTTPStatus
from typing import TYPE_CHECKING
from synapse.api.errors import Codes, SynapseError
from synapse.api.ratelimiting import Ratelimiter
from synapse.types import (
Requester,
)
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class ReportsHandler:
def __init__(self, hs: "HomeServer"):
self._hs = hs
self._store = hs.get_datastores().main
self._clock = hs.get_clock()
# Ratelimiter for user reports, keyed by the requesting user ID.
self._reports_ratelimiter = Ratelimiter(
store=self._store,
clock=self._clock,
cfg=hs.config.ratelimiting.rc_reports,
)
async def report_user(
self, requester: Requester, target_user_id: str, reason: str
) -> None:
"""Files a report against a user from a user.
Rate and size limits are applied to the report. If the user being reported
does not belong to this server, the report is ignored. This check is done
after the limits to reduce DoS potential.
If the user being reported belongs to this server, but doesn't exist, we
similarly ignore the report. The spec allows us to return an error if we
want to, but we choose to hide that user's existence instead.
If the report is otherwise valid (for a user which exists on our server),
we append it to the database for later processing.
Args:
requester: The user filing the report.
target_user_id: The user being reported.
reason: The user-supplied reason the user is being reported.
Raises:
SynapseError for BAD_REQUEST/BAD_JSON if the reason is too long.
"""
await self._check_limits(requester)
if len(reason) > 1000:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Reason must be less than 1000 characters",
Codes.BAD_JSON,
)
if not self._hs.is_mine_id(target_user_id):
return # hide that they're not ours/that we can't do anything about them
user = await self._store.get_user_by_id(target_user_id)
if user is None:
return # hide that they don't exist
await self._store.add_user_report(
target_user_id=target_user_id,
user_id=requester.user.to_string(),
reason=reason,
received_ts=self._clock.time_msec(),
)
async def _check_limits(self, requester: Requester) -> None:
await self._reports_ratelimiter.ratelimit(
requester,
requester.user.to_string(),
)
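A hedged sketch of the expected call site for the new handler (the servlet is outside this excerpt, and `hs.get_reports_handler()` is assumed from the usual `HomeServer` accessor convention rather than shown here):

```python
async def on_report_user(hs, requester, target_user_id: str, reason: str) -> None:
    handler = hs.get_reports_handler()  # assumed accessor, not in this diff
    # Remote or unknown targets are ignored silently; a SynapseError is
    # raised only for a reason over 1000 characters or when rc_reports trips.
    await handler.report_user(requester, target_user_id, reason)
```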


@ -698,7 +698,7 @@ class RoomCreationHandler:
except SynapseError as e:
# again I'm not really expecting this to fail, but if it does, I'd rather
# we returned the new room to the client at this point.
logger.error("Unable to send updated alias events in old room: %s", e)
logger.exception("Unable to send updated alias events in old room: %s", e)
try:
await self.event_creation_handler.create_and_send_nonmember_event(
@ -715,7 +715,7 @@ class RoomCreationHandler:
except SynapseError as e:
# again I'm not really expecting this to fail, but if it does, I'd rather
# we returned the new room to the client at this point.
logger.error("Unable to send updated alias events in new room: %s", e)
logger.exception("Unable to send updated alias events in new room: %s", e)
async def create_room(
self,


@ -922,7 +922,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
rule = invite_config.get_invite_rule(requester.user.to_string())
if rule == InviteRule.BLOCK:
logger.info(
f"Automatically rejecting invite from {target_id} due to the the invite filtering rules of {requester.user}"
"Automatically rejecting invite from %s due to the the invite filtering rules of %s",
target_id,
requester.user,
)
raise SynapseError(
403,
@ -1570,7 +1572,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
require_consent=False,
)
except Exception as e:
logger.exception("Error kicking guest user: %s" % (e,))
logger.exception("Error kicking guest user: %s", e)
async def lookup_room_alias(
self, room_alias: RoomAlias


@ -54,6 +54,9 @@ class RoomPolicyHandler:
Returns:
bool: True if the event is allowed in the room, False otherwise.
"""
if event.type == "org.matrix.msc4284.policy" and event.state_key is not None:
return True # always allow policy server change events
policy_event = await self._storage_controllers.state.get_current_state_event(
event.room_id, "org.matrix.msc4284.policy", ""
)


@ -111,7 +111,15 @@ class RoomSummaryHandler:
# If a user tries to fetch the same page multiple times in quick succession,
# only process the first attempt and return its result to subsequent requests.
self._pagination_response_cache: ResponseCache[
Tuple[str, str, bool, Optional[int], Optional[int], Optional[str]]
Tuple[
str,
str,
bool,
Optional[int],
Optional[int],
Optional[str],
Optional[Tuple[str, ...]],
]
] = ResponseCache(
hs.get_clock(),
"get_room_hierarchy",
@ -126,6 +134,7 @@ class RoomSummaryHandler:
max_depth: Optional[int] = None,
limit: Optional[int] = None,
from_token: Optional[str] = None,
remote_room_hosts: Optional[Tuple[str, ...]] = None,
) -> JsonDict:
"""
Implementation of the room hierarchy C-S API.
@ -143,6 +152,9 @@ class RoomSummaryHandler:
limit: An optional limit on the number of rooms to return per
page. Must be a positive integer.
from_token: An optional pagination token.
remote_room_hosts: An optional tuple of remote homeserver names. If defined,
each host will be used to try to fetch the room hierarchy. Must be a tuple so
that it can be hashed by the `RoomSummaryHandler._pagination_response_cache`.
Returns:
The JSON hierarchy dictionary.
@ -162,6 +174,7 @@ class RoomSummaryHandler:
max_depth,
limit,
from_token,
remote_room_hosts,
),
self._get_room_hierarchy,
requester.user.to_string(),
@ -170,6 +183,7 @@ class RoomSummaryHandler:
max_depth,
limit,
from_token,
remote_room_hosts,
)
async def _get_room_hierarchy(
@ -180,6 +194,7 @@ class RoomSummaryHandler:
max_depth: Optional[int] = None,
limit: Optional[int] = None,
from_token: Optional[str] = None,
remote_room_hosts: Optional[Tuple[str, ...]] = None,
) -> JsonDict:
"""See docstring for SpaceSummaryHandler.get_room_hierarchy."""
@ -199,7 +214,7 @@ class RoomSummaryHandler:
if not local_room:
room_hierarchy = await self._summarize_remote_room_hierarchy(
_RoomQueueEntry(requested_room_id, ()),
_RoomQueueEntry(requested_room_id, remote_room_hosts or ()),
False,
)
root_room_entry = room_hierarchy[0]
@ -240,7 +255,7 @@ class RoomSummaryHandler:
processed_rooms = set(pagination_session["processed_rooms"])
else:
# The queue of rooms to process, the next room is last on the stack.
room_queue = [_RoomQueueEntry(requested_room_id, ())]
room_queue = [_RoomQueueEntry(requested_room_id, remote_room_hosts or ())]
# Rooms we have already processed.
processed_rooms = set()
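`remote_room_hosts` carries MSC4235's `via` query parameters down from the REST layer, and must be a tuple because it forms part of the `ResponseCache` key. A hedged sketch of the servlet-side conversion (the parsing helper is an assumption; only `get_room_hierarchy` itself appears in this diff):

```python
from typing import Dict, List, Optional, Tuple


def parse_via_params(args: Dict[bytes, List[bytes]]) -> Optional[Tuple[str, ...]]:
    # Twisted exposes ?via=one.example&via=two.example as
    # {b"via": [b"one.example", b"two.example"]}.
    via = args.get(b"via", [])
    return tuple(v.decode("ascii") for v in via) or None
```

The resulting tuple (or `None`) is then passed straight through as `remote_room_hosts`, keeping the cache key hashable.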


@ -124,7 +124,7 @@ class SamlHandler:
)
# Since SAML sessions timeout it is useful to log when they were created.
logger.info("Initiating a new SAML session: %s" % (reqid,))
logger.info("Initiating a new SAML session: %s", reqid)
now = self.clock.time_msec()
self._outstanding_requests_dict[reqid] = Saml2SessionData(


@ -238,7 +238,7 @@ class SendEmailHandler:
multipart_msg.attach(text_part)
multipart_msg.attach(html_part)
logger.info("Sending email to %s" % email_address)
logger.info("Sending email to %s", email_address)
await self._sendmail(
self._reactor,


@ -23,6 +23,7 @@ from typing import (
List,
Literal,
Mapping,
MutableMapping,
Optional,
Set,
Tuple,
@ -73,6 +74,7 @@ from synapse.types.handlers.sliding_sync import (
SlidingSyncResult,
)
from synapse.types.state import StateFilter
from synapse.util import MutableOverlayMapping
if TYPE_CHECKING:
from synapse.server import HomeServer
@ -245,11 +247,13 @@ class SlidingSyncRoomLists:
# Note: this won't include rooms the user has left themselves. We add back
# `newly_left` rooms below. This is more efficient than fetching all rooms and
# then filtering out the old left rooms.
room_membership_for_user_map = (
room_membership_for_user_map: MutableMapping[str, RoomsForUserSlidingSync] = (
MutableOverlayMapping(
await self.store.get_sliding_sync_rooms_for_user_from_membership_snapshots(
user_id
)
)
)
# To play nice with the rewind logic below, we need to go fetch the rooms the
# user has left themselves but only if it changed after the `to_token`.
#
@ -268,26 +272,12 @@ class SlidingSyncRoomLists:
)
)
if self_leave_room_membership_for_user_map:
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
room_membership_for_user_map.update(self_leave_room_membership_for_user_map)
# Remove invites from ignored users
ignored_users = await self.store.ignored_users(user_id)
invite_config = await self.store.get_invite_config_for_user(user_id)
if ignored_users:
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
# Make a copy so we don't run into an error: `dictionary changed size during
# iteration`, when we remove items
for room_id in list(room_membership_for_user_map.keys()):
@ -316,13 +306,6 @@ class SlidingSyncRoomLists:
sync_config.user, room_membership_for_user_map, to_token=to_token
)
if changes:
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id, change in changes.items():
if change is None:
# Remove rooms that the user joined after the `to_token`
@ -364,13 +347,6 @@ class SlidingSyncRoomLists:
newly_left_room_map.keys() - room_membership_for_user_map.keys()
)
if missing_newly_left_rooms:
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id in missing_newly_left_rooms:
newly_left_room_for_user = newly_left_room_map[room_id]
# This should be a given
@ -461,6 +437,10 @@ class SlidingSyncRoomLists:
else:
room_membership_for_user_map.pop(room_id, None)
# Remove any rooms that we globally exclude from sync.
for room_id in self.rooms_to_exclude_globally:
room_membership_for_user_map.pop(room_id, None)
dm_room_ids = await self._get_dm_rooms_for_user(user_id)
if sync_config.lists:
@ -577,14 +557,6 @@ class SlidingSyncRoomLists:
if sync_config.room_subscriptions:
with start_active_span("assemble_room_subscriptions"):
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
# Find which rooms are partially stated and may need to be filtered out
# depending on the `required_state` requested (see below).
partial_state_rooms = await self.store.get_partial_rooms()

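The repeated `FIXME` copies removed above are replaced by wrapping the cached, immutable snapshot in `MutableOverlayMapping`. A sketch of the copy-on-write idea, assuming the real class in `synapse.util` behaves like a write overlay over a read-only base (names and details here are illustrative):

```python
from typing import Dict, Iterator, Mapping, MutableMapping, TypeVar, cast

K = TypeVar("K")
V = TypeVar("V")

_DELETED = object()  # sentinel marking a key deleted in the overlay


class OverlayMapping(MutableMapping[K, V]):
    """Copy-on-write view over an immutable mapping: reads fall through
    to the underlying map, writes and deletions land in a private
    overlay, so the (cached) underlying map is never mutated."""

    def __init__(self, underlying: Mapping[K, V]) -> None:
        self._underlying = underlying
        self._overlay: Dict[K, object] = {}

    def __getitem__(self, key: K) -> V:
        if key in self._overlay:
            value = self._overlay[key]
            if value is _DELETED:
                raise KeyError(key)
            return cast(V, value)
        return self._underlying[key]

    def __setitem__(self, key: K, value: V) -> None:
        self._overlay[key] = value

    def __delitem__(self, key: K) -> None:
        if key not in self:
            raise KeyError(key)
        self._overlay[key] = _DELETED

    def __iter__(self) -> Iterator[K]:
        seen = set()
        for key, value in self._overlay.items():
            seen.add(key)
            if value is not _DELETED:
                yield key
        for key in self._underlying:
            if key not in seen:
                yield key

    def __len__(self) -> int:
        return sum(1 for _ in self)
```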
View File

@ -1230,12 +1230,16 @@ class SsoHandler:
if expected_user_id is not None and user_id != expected_user_id:
logger.error(
"Received a logout notification from SSO provider "
f"{auth_provider_id!r} for the user {expected_user_id!r}, but with "
f"a session ID ({auth_provider_session_id!r}) which belongs to "
f"{user_id!r}. This may happen when the SSO provider user mapper "
"%r for the user %r, but with "
"a session ID (%r) which belongs to "
"%r. This may happen when the SSO provider user mapper "
"uses something else than the standard attribute as mapping ID. "
"For OIDC providers, set `backchannel_logout_ignore_sub` to `true` "
"in the provider config if that is the case."
"in the provider config if that is the case.",
auth_provider_id,
expected_user_id,
auth_provider_session_id,
user_id,
)
continue

View File

@ -3074,8 +3074,10 @@ class SyncHandler:
if batch.limited and since_token:
user_id = sync_result_builder.sync_config.user.to_string()
logger.debug(
"Incremental gappy sync of %s for user %s with %d state events"
% (room_id, user_id, len(state))
"Incremental gappy sync of %s for user %s with %d state events",
room_id,
user_id,
len(state),
)
elif room_builder.rtype == "archived":
archived_room_sync = ArchivedSyncResult(

View File

@ -749,10 +749,9 @@ class UserDirectoryHandler(StateDeltasHandler):
)
continue
except Exception:
logger.error(
logger.exception(
"Failed to refresh profile for %r due to unhandled exception",
user_id,
exc_info=True,
)
await self.store.set_remote_user_profile_in_user_dir_stale(
user_id,

View File

@ -44,12 +44,15 @@ from synapse.logging.opentracing import start_active_span
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.storage.databases.main.lock import Lock, LockStore
from synapse.util.async_helpers import timeout_deferred
from synapse.util.constants import ONE_MINUTE_SECONDS
if TYPE_CHECKING:
from synapse.logging.opentracing import opentracing
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
# This lock is used to avoid creating an event while we are purging the room.
# We take a read lock when creating an event, and a write one when purging a room.
# This is because it is fine to create several events concurrently, since referenced events
@ -270,9 +273,10 @@ class WaitingLock:
def _get_next_retry_interval(self) -> float:
next = self._retry_interval
self._retry_interval = max(5, next * 2)
if self._retry_interval > 5 * 2 ^ 7: # ~10 minutes
logging.warning(
f"Lock timeout is getting excessive: {self._retry_interval}s. There may be a deadlock."
if self._retry_interval > 10 * ONE_MINUTE_SECONDS: # >7 iterations
logger.warning(
"Lock timeout is getting excessive: %ss. There may be a deadlock.",
self._retry_interval,
)
return next * random.uniform(0.9, 1.1)
@ -349,8 +353,9 @@ class WaitingMultiLock:
def _get_next_retry_interval(self) -> float:
next = self._retry_interval
self._retry_interval = max(5, next * 2)
if self._retry_interval > 5 * 2 ^ 7: # ~10 minutes
logging.warning(
f"Lock timeout is getting excessive: {self._retry_interval}s. There may be a deadlock."
if self._retry_interval > 10 * ONE_MINUTE_SECONDS: # >7 iterations
logger.warning(
"Lock timeout is getting excessive: %ss. There may be a deadlock.",
self._retry_interval,
)
return next * random.uniform(0.9, 1.1)

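Worth spelling out why the old guard was wrong: in Python `^` is bitwise XOR, not exponentiation, so `5 * 2 ^ 7` parses as `(5 * 2) ^ 7 == 13`, nowhere near the ~10 minutes the comment claimed. A condensed sketch of the (unchanged) backoff schedule with the corrected threshold:

```python
import random

ONE_MINUTE_SECONDS = 60

# The old guard accidentally compared against 13 seconds:
assert 5 * 2 ^ 7 == 13  # `*` binds tighter than `^` (XOR)
assert 5 * 2**7 == 640  # what was presumably intended (~10 minutes)


def retry_intervals(initial: float = 5.0, rounds: int = 10):
    """Doubling backoff with +/-10% jitter, as in WaitingLock."""
    interval = initial
    for _ in range(rounds):
        yield interval * random.uniform(0.9, 1.1)
        interval = max(5, interval * 2)
        if interval > 10 * ONE_MINUTE_SECONDS:  # crossed after 7 doublings
            break
```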
View File

@ -53,7 +53,7 @@ class AdditionalResource(DirectServeJsonResource):
hs: homeserver
handler: function to be called to handle the request.
"""
super().__init__()
super().__init__(clock=hs.get_clock())
self._handler = handler
async def _async_render(self, request: Request) -> Optional[Tuple[int, Any]]:

View File

@ -213,7 +213,7 @@ class _IPBlockingResolver:
if _is_ip_blocked(ip_address, self._ip_allowlist, self._ip_blocklist):
logger.info(
"Blocked %s from DNS resolution to %s" % (ip_address, hostname)
"Blocked %s from DNS resolution to %s", ip_address, hostname
)
has_bad_ip = True
@ -318,7 +318,7 @@ class BlocklistingAgentWrapper(Agent):
pass
else:
if _is_ip_blocked(ip_address, self._ip_allowlist, self._ip_blocklist):
logger.info("Blocking access to %s" % (ip_address,))
logger.info("Blocking access to %s", ip_address)
e = SynapseError(HTTPStatus.FORBIDDEN, "IP address blocked")
return defer.fail(Failure(e))
@ -723,7 +723,7 @@ class BaseHttpClient:
resp_headers = dict(response.headers.getAllRawHeaders())
if response.code > 299:
logger.warning("Got %d when downloading %s" % (response.code, url))
logger.warning("Got %d when downloading %s", response.code, url)
raise SynapseError(
HTTPStatus.BAD_GATEWAY, "Got error %d" % (response.code,), Codes.UNKNOWN
)
@ -1106,7 +1106,7 @@ class _MultipartParserProtocol(protocol.Protocol):
self.stream.write(data[start:end])
except Exception as e:
logger.warning(
f"Exception encountered writing file data to stream: {e}"
"Exception encountered writing file data to stream: %s", e
)
self.deferred.errback()
self.file_length += end - start
@ -1129,7 +1129,7 @@ class _MultipartParserProtocol(protocol.Protocol):
try:
self.parser.write(incoming_data)
except Exception as e:
logger.warning(f"Exception writing to multipart parser: {e}")
logger.warning("Exception writing to multipart parser: %s", e)
self.deferred.errback()
return

View File

@ -602,7 +602,7 @@ class MatrixFederationHttpClient:
try:
parse_and_validate_server_name(request.destination)
except ValueError:
logger.exception(f"Invalid destination: {request.destination}.")
logger.exception("Invalid destination: %s.", request.destination)
raise FederationDeniedError(request.destination)
if timeout is not None:

View File

@ -106,7 +106,7 @@ class ProxyResource(_AsyncResource):
isLeaf = True
def __init__(self, reactor: ISynapseReactor, hs: "HomeServer"):
super().__init__(True)
super().__init__(hs.get_clock(), True)
self.reactor = reactor
self.agent = hs.get_federation_http_client().agent

View File

@ -21,7 +21,7 @@
import logging
import random
import re
from typing import Any, Collection, Dict, List, Optional, Sequence, Tuple, Union
from typing import Any, Collection, Dict, List, Optional, Sequence, Tuple, Union, cast
from urllib.parse import urlparse
from urllib.request import ( # type: ignore[attr-defined]
getproxies_environment,
@ -40,6 +40,7 @@ from twisted.internet.interfaces import (
IProtocol,
IProtocolFactory,
IReactorCore,
IReactorTime,
IStreamClientEndpoint,
)
from twisted.python.failure import Failure
@ -129,7 +130,9 @@ class ProxyAgent(_AgentBase):
):
contextFactory = contextFactory or BrowserLikePolicyForHTTPS()
_AgentBase.__init__(self, reactor, pool)
# `_AgentBase` expects an `IReactorTime` provider. `IReactorCore`
# extends `IReactorTime`, so this cast is safe.
_AgentBase.__init__(self, cast(IReactorTime, reactor), pool)
if proxy_reactor is None:
self.proxy_reactor = reactor
@ -168,7 +171,7 @@ class ProxyAgent(_AgentBase):
self.no_proxy = no_proxy
self._policy_for_https = contextFactory
self._reactor = reactor
self._reactor = cast(IReactorTime, reactor)
self._federation_proxy_endpoint: Optional[IStreamClientEndpoint] = None
self._federation_proxy_credentials: Optional[ProxyCredentials] = None
@ -257,7 +260,11 @@ class ProxyAgent(_AgentBase):
raise ValueError(f"Invalid URI {uri!r}")
parsed_uri = URI.fromBytes(uri)
pool_key = f"{parsed_uri.scheme!r}{parsed_uri.host!r}{parsed_uri.port}"
pool_key: tuple[bytes, bytes, int] = (
parsed_uri.scheme,
parsed_uri.host,
parsed_uri.port,
)
request_path = parsed_uri.originForm
should_skip_proxy = False
@ -283,7 +290,7 @@ class ProxyAgent(_AgentBase):
)
# Cache *all* connections under the same key, since we are only
# connecting to a single destination, the proxy:
pool_key = "http-proxy"
pool_key = (b"http-proxy", b"", 0)
endpoint = self.http_proxy_endpoint
request_path = uri
elif (

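This replacement (and the similar one in `ReplicationAgent` below) gives the connection pool a key of the same shape Twisted's `_AgentBase` builds for itself: a `(scheme, host, port)` triple of bytes/int. A small check of the shape, with a hypothetical URI:

```python
from twisted.web.client import URI

uri = URI.fromBytes(b"https://matrix.example.com:8448/_matrix/key/v2/server")

# Key the pool on (scheme, host, port) so entries made here are
# compatible with the keys _AgentBase derives from parsed URIs.
pool_key = (uri.scheme, uri.host, uri.port)
assert pool_key == (b"https", b"matrix.example.com", 8448)
```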
View File

@ -180,9 +180,16 @@ class ReplicationAgent(_AgentBase):
worker_name = parsedURI.netloc.decode("utf-8")
key_scheme = self._endpointFactory.instance_map[worker_name].scheme()
key_netloc = self._endpointFactory.instance_map[worker_name].netloc()
# This sets the Pool key to be:
# (http(s), <host:port>) or (unix, <socket_path>)
key = (key_scheme, key_netloc)
# Build a connection pool key.
#
# `_AgentBase` expects this to be a three-tuple of `(scheme, host,
# port)` of type `bytes`. We don't have a real port when connecting via
# a Unix socket, so use `0`.
key = (
key_scheme.encode("ascii"),
key_netloc.encode("utf-8"),
0,
)
# _requestWithEndpoint comes from _AgentBase class
return self._requestWithEndpoint(

View File

@ -42,6 +42,7 @@ from typing import (
Protocol,
Tuple,
Union,
cast,
)
import attr
@ -49,8 +50,9 @@ import jinja2
from canonicaljson import encode_canonical_json
from zope.interface import implementer
from twisted.internet import defer, interfaces
from twisted.internet import defer, interfaces, reactor
from twisted.internet.defer import CancelledError
from twisted.internet.interfaces import IReactorTime
from twisted.python import failure
from twisted.web import resource
@ -67,6 +69,7 @@ from twisted.web.util import redirectTo
from synapse.api.errors import (
CodeMessageException,
Codes,
LimitExceededError,
RedirectException,
SynapseError,
UnrecognizedRequestError,
@ -74,7 +77,7 @@ from synapse.api.errors import (
from synapse.config.homeserver import HomeServerConfig
from synapse.logging.context import defer_to_thread, preserve_fn, run_in_background
from synapse.logging.opentracing import active_span, start_active_span, trace_servlet
from synapse.util import json_encoder
from synapse.util import Clock, json_encoder
from synapse.util.caches import intern_dict
from synapse.util.cancellation import is_function_cancellable
from synapse.util.iterutils import chunk_seq
@ -308,9 +311,10 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
context from the request the servlet is handling.
"""
def __init__(self, extract_context: bool = False):
def __init__(self, clock: Clock, extract_context: bool = False):
super().__init__()
self._clock = clock
self._extract_context = extract_context
def render(self, request: "SynapseRequest") -> int:
@ -329,7 +333,12 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
request.request_metrics.name = self.__class__.__name__
with trace_servlet(request, self._extract_context):
try:
callback_return = await self._async_render(request)
except LimitExceededError as e:
if e.pause:
await self._clock.sleep(e.pause)
raise
if callback_return is not None:
code, response = callback_return
@ -393,8 +402,17 @@ class DirectServeJsonResource(_AsyncResource):
formatting responses and errors as JSON.
"""
def __init__(self, canonical_json: bool = False, extract_context: bool = False):
super().__init__(extract_context)
def __init__(
self,
canonical_json: bool = False,
extract_context: bool = False,
# Clock is optional as this class is exposed to the module API.
clock: Optional[Clock] = None,
):
if clock is None:
clock = Clock(cast(IReactorTime, reactor))
super().__init__(clock, extract_context)
self.canonical_json = canonical_json
def _send_response(
@ -450,8 +468,8 @@ class JsonResource(DirectServeJsonResource):
canonical_json: bool = True,
extract_context: bool = False,
):
super().__init__(canonical_json, extract_context)
self.clock = hs.get_clock()
super().__init__(canonical_json, extract_context, clock=self.clock)
# Map of path regex -> method -> callback.
self._routes: Dict[Pattern[str], Dict[bytes, _PathEntry]] = {}
self.hs = hs
@ -497,7 +515,7 @@ class JsonResource(DirectServeJsonResource):
key word arguments to pass to the callback
"""
# At this point the path must be bytes.
request_path_bytes: bytes = request.path # type: ignore
request_path_bytes: bytes = request.path
request_path = request_path_bytes.decode("ascii")
# Treat HEAD requests as GET requests.
request_method = request.method
@ -564,6 +582,17 @@ class DirectServeHtmlResource(_AsyncResource):
# The error template to use for this resource
ERROR_TEMPLATE = HTML_ERROR_TEMPLATE
def __init__(
self,
extract_context: bool = False,
# Clock is optional as this class is exposed to the module API.
clock: Optional[Clock] = None,
):
if clock is None:
clock = Clock(cast(IReactorTime, reactor))
super().__init__(clock, extract_context)
def _send_response(
self,
request: "SynapseRequest",

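Threading a `Clock` into `_AsyncResource` is what lets the new `LimitExceededError` handler pause before re-raising; `clock` stays optional on the `DirectServe*` classes (with a reactor-backed fallback) so module-defined resources keep working. A hypothetical subclass passing the homeserver clock explicitly:

```python
from typing import TYPE_CHECKING, Tuple

from synapse.http.server import DirectServeJsonResource
from synapse.types import JsonDict

if TYPE_CHECKING:
    from synapse.server import HomeServer


class PingResource(DirectServeJsonResource):
    """Illustrative resource, not part of this diff."""

    def __init__(self, hs: "HomeServer"):
        # Prefer the homeserver's Clock over the reactor-backed fallback.
        super().__init__(clock=hs.get_clock())

    async def _async_render_GET(self, request) -> Tuple[int, JsonDict]:
        return 200, {"pong": True}
```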
View File

@ -796,6 +796,13 @@ def inject_response_headers(response_headers: Headers) -> None:
response_headers.addRawHeader("Synapse-Trace-Id", f"{trace_id:x}")
@ensure_active_span("inject the span into a header dict")
def inject_request_headers(headers: Dict[str, str]) -> None:
span = opentracing.tracer.active_span
assert span is not None
opentracing.tracer.inject(span.context, opentracing.Format.HTTP_HEADERS, headers)
@ensure_active_span(
"get the active span context as a dict", ret=cast(Dict[str, str], {})
)

View File

@ -313,7 +313,7 @@ class MediaRepository:
logger.info("Stored local media in file %r", fname)
if should_quarantine:
logger.warn(
logger.warning(
"Media has been automatically quarantined as it matched existing quarantined media"
)
@ -366,7 +366,7 @@ class MediaRepository:
logger.info("Stored local media in file %r", fname)
if should_quarantine:
logger.warn(
logger.warning(
"Media has been automatically quarantined as it matched existing quarantined media"
)
@ -1393,8 +1393,8 @@ class MediaRepository:
)
logger.info(
"Purging remote media last accessed before"
f" {remote_media_threshold_timestamp_ms}"
"Purging remote media last accessed before %s",
remote_media_threshold_timestamp_ms,
)
await self.delete_old_remote_media(
@ -1409,8 +1409,8 @@ class MediaRepository:
)
logger.info(
"Purging local media last accessed before"
f" {local_media_threshold_timestamp_ms}"
"Purging local media last accessed before %s",
local_media_threshold_timestamp_ms,
)
await self.delete_old_local_media(

View File

@ -287,7 +287,7 @@ class UrlPreviewer:
og["og:image:width"] = dims["width"]
og["og:image:height"] = dims["height"]
else:
logger.warning("Couldn't get dims for %s" % url)
logger.warning("Couldn't get dims for %s", url)
# define our OG response for this media
elif _is_html(media_info.media_type):
@ -609,7 +609,7 @@ class UrlPreviewer:
should_quarantine = await self.store.get_is_hash_quarantined(sha256)
if should_quarantine:
logger.warn(
logger.warning(
"Media has been automatically quarantined as it matched existing quarantined media"
)

View File

@ -118,7 +118,7 @@ class LaterGauge(Collector):
def _register(self) -> None:
if self.name in all_gauges.keys():
logger.warning("%s already registered, reregistering" % (self.name,))
logger.warning("%s already registered, reregistering", self.name)
REGISTRY.unregister(all_gauges.pop(self.name))
REGISTRY.register(self)
@ -244,7 +244,7 @@ class InFlightGauge(Generic[MetricsEntry], Collector):
def _register_with_collector(self) -> None:
if self.name in all_gauges.keys():
logger.warning("%s already registered, reregistering" % (self.name,))
logger.warning("%s already registered, reregistering", self.name)
REGISTRY.unregister(all_gauges.pop(self.name))
REGISTRY.register(self)

View File

@ -104,6 +104,7 @@ from synapse.module_api.callbacks.spamchecker_callbacks import (
CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK,
CHECK_REGISTRATION_FOR_SPAM_CALLBACK,
CHECK_USERNAME_FOR_SPAM_CALLBACK,
FEDERATED_USER_MAY_INVITE_CALLBACK,
SHOULD_DROP_FEDERATED_EVENT_CALLBACK,
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK,
USER_MAY_CREATE_ROOM_CALLBACK,
@ -315,6 +316,7 @@ class ModuleApi:
] = None,
user_may_join_room: Optional[USER_MAY_JOIN_ROOM_CALLBACK] = None,
user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
federated_user_may_invite: Optional[FEDERATED_USER_MAY_INVITE_CALLBACK] = None,
user_may_send_3pid_invite: Optional[USER_MAY_SEND_3PID_INVITE_CALLBACK] = None,
user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
user_may_create_room_alias: Optional[
@ -338,6 +340,7 @@ class ModuleApi:
should_drop_federated_event=should_drop_federated_event,
user_may_join_room=user_may_join_room,
user_may_invite=user_may_invite,
federated_user_may_invite=federated_user_may_invite,
user_may_send_3pid_invite=user_may_send_3pid_invite,
user_may_create_room=user_may_create_room,
user_may_create_room_alias=user_may_create_room_alias,

View File

@ -105,6 +105,22 @@ USER_MAY_INVITE_CALLBACK = Callable[
]
],
]
FEDERATED_USER_MAY_INVITE_CALLBACK = Callable[
["synapse.events.EventBase"],
Awaitable[
Union[
Literal["NOT_SPAM"],
Codes,
# Highly experimental, not officially part of the spamchecker API, may
# disappear without warning depending on the results of ongoing
# experiments.
# Use this to return additional information as part of an error.
Tuple[Codes, JsonDict],
# Deprecated
bool,
]
],
]
USER_MAY_SEND_3PID_INVITE_CALLBACK = Callable[
[str, str, str, str],
Awaitable[
@ -266,6 +282,7 @@ def load_legacy_spam_checkers(hs: "synapse.server.HomeServer") -> None:
spam_checker_methods = {
"check_event_for_spam",
"user_may_invite",
"federated_user_may_invite",
"user_may_create_room",
"user_may_create_room_alias",
"user_may_publish_room",
@ -347,6 +364,9 @@ class SpamCheckerModuleApiCallbacks:
] = []
self._user_may_join_room_callbacks: List[USER_MAY_JOIN_ROOM_CALLBACK] = []
self._user_may_invite_callbacks: List[USER_MAY_INVITE_CALLBACK] = []
self._federated_user_may_invite_callbacks: List[
FEDERATED_USER_MAY_INVITE_CALLBACK
] = []
self._user_may_send_3pid_invite_callbacks: List[
USER_MAY_SEND_3PID_INVITE_CALLBACK
] = []
@ -377,6 +397,7 @@ class SpamCheckerModuleApiCallbacks:
] = None,
user_may_join_room: Optional[USER_MAY_JOIN_ROOM_CALLBACK] = None,
user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
federated_user_may_invite: Optional[FEDERATED_USER_MAY_INVITE_CALLBACK] = None,
user_may_send_3pid_invite: Optional[USER_MAY_SEND_3PID_INVITE_CALLBACK] = None,
user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
user_may_create_room_alias: Optional[
@ -406,6 +427,11 @@ class SpamCheckerModuleApiCallbacks:
if user_may_invite is not None:
self._user_may_invite_callbacks.append(user_may_invite)
if federated_user_may_invite is not None:
self._federated_user_may_invite_callbacks.append(
federated_user_may_invite,
)
if user_may_send_3pid_invite is not None:
self._user_may_send_3pid_invite_callbacks.append(
user_may_send_3pid_invite,
@ -605,6 +631,43 @@ class SpamCheckerModuleApiCallbacks:
# No spam-checker has rejected the request, let it pass.
return self.NOT_SPAM
async def federated_user_may_invite(
self, event: "synapse.events.EventBase"
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
"""Checks if a given user may send an invite
Args:
event: The event to be checked
Returns:
NOT_SPAM if the operation is permitted, Codes otherwise.
"""
for callback in self._federated_user_may_invite_callbacks:
with Measure(self.clock, f"{callback.__module__}.{callback.__qualname__}"):
res = await delay_cancellation(callback(event))
# Normalize return values to `Codes` or `"NOT_SPAM"`.
if res is True or res is self.NOT_SPAM:
continue
elif res is False:
return synapse.api.errors.Codes.FORBIDDEN, {}
elif isinstance(res, synapse.api.errors.Codes):
return res, {}
elif (
isinstance(res, tuple)
and len(res) == 2
and isinstance(res[0], synapse.api.errors.Codes)
and isinstance(res[1], dict)
):
return res
else:
logger.warning(
"Module returned invalid value, rejecting invite as spam"
)
return synapse.api.errors.Codes.FORBIDDEN, {}
# Check the standard user_may_invite callback if no module has rejected the invite yet.
return await self.user_may_invite(event.sender, event.state_key, event.room_id)
async def user_may_send_3pid_invite(
self, inviter_userid: str, medium: str, address: str, room_id: str
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:

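A hypothetical module registering the new hook through the module API plumbing added above; the return values follow the same `NOT_SPAM`/`Codes` contract as `user_may_invite`:

```python
from typing import Literal, Union

from synapse.api.errors import Codes
from synapse.events import EventBase
from synapse.module_api import ModuleApi


class FederatedInviteFilter:
    """Illustrative module: rejects federated invites from one server."""

    def __init__(self, config: dict, api: ModuleApi):
        api.register_spam_checker_callbacks(
            federated_user_may_invite=self.federated_user_may_invite,
        )

    async def federated_user_may_invite(
        self, event: EventBase
    ) -> Union[Literal["NOT_SPAM"], Codes]:
        if event.sender.endswith(":spam.example.com"):
            return Codes.FORBIDDEN
        return "NOT_SPAM"
```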
View File

@ -50,7 +50,7 @@ from synapse.event_auth import auth_types_for_event, get_user_power_level
from synapse.events import EventBase, relation_from_event
from synapse.events.snapshot import EventContext
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.state import POWER_KEY
from synapse.state import CREATE_KEY, POWER_KEY
from synapse.storage.databases.main.roommember import EventIdMembership
from synapse.storage.invite_rule import InviteRule
from synapse.storage.roommember import ProfileInfo
@ -246,6 +246,7 @@ class BulkPushRuleEvaluator:
StateFilter.from_types(event_types)
)
pl_event_id = prev_state_ids.get(POWER_KEY)
create_event_id = prev_state_ids.get(CREATE_KEY)
# fastpath: if there's a power level event, that's all we need, and
# not having a power level event is an extreme edge case
@ -268,6 +269,26 @@ class BulkPushRuleEvaluator:
if auth_event:
auth_events_dict[auth_event_id] = auth_event
auth_events = {(e.type, e.state_key): e for e in auth_events_dict.values()}
if auth_events.get(CREATE_KEY) is None:
# if the event being checked is the create event, use its own permissions
if event.type == EventTypes.Create and event.get_state_key() == "":
auth_events[CREATE_KEY] = event
else:
auth_events[
CREATE_KEY
] = await self.store.get_create_event_for_room(event.room_id)
# if we are evaluating the create event, then use itself to determine power levels.
if event.type == EventTypes.Create and event.get_state_key() == "":
auth_events[CREATE_KEY] = event
else:
# if we aren't processing the create event, create_event_id should always be set
assert create_event_id is not None
create_event = event_id_to_event.get(create_event_id)
if create_event:
auth_events[CREATE_KEY] = create_event
else:
auth_events[CREATE_KEY] = await self.store.get_event(create_event_id)
sender_level = get_user_power_level(event.sender, auth_events)

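The extra `CREATE_KEY` handling above exists because, in newer room versions, a sender's power level can derive from the create event itself, so `get_user_power_level` needs it in `auth_events`. A sketch of the fallback order implemented above (illustrative helper, not the real code):

```python
async def resolve_create_event(event, create_event_id, event_id_to_event, store):
    """Locate the room's create event for the auth_events dict:
    1. the event being evaluated, when it *is* the create event;
    2. the batch-local event cache;
    3. a store lookup by event ID."""
    if event.type == "m.room.create" and event.get_state_key() == "":
        return event
    # Outside the create event itself, its ID must be known from state.
    assert create_event_id is not None
    cached = event_id_to_event.get(create_event_id)
    return cached if cached is not None else await store.get_event(create_event_id)
```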
View File

@ -135,7 +135,7 @@ class Mailer:
self.app_name = app_name
self.email_subjects: EmailSubjectConfig = hs.config.email.email_subjects
logger.info("Created Mailer for app_name %s" % app_name)
logger.info("Created Mailer for app_name %s", app_name)
emails_sent_counter.labels("password_reset")

View File

@ -165,7 +165,7 @@ class ClientRestResource(JsonResource):
# Fail on unknown servlet groups.
if servlet_group not in SERVLET_GROUPS:
if servlet_group == "media":
logger.warn(
logger.warning(
"media.can_load_media_repo needs to be configured for the media servlet to be available"
)
raise RuntimeError(

View File

@ -71,7 +71,7 @@ class QuarantineMediaInRoom(RestServlet):
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester)
logging.info("Quarantining room: %s", room_id)
logger.info("Quarantining room: %s", room_id)
# Quarantine all media in this room
num_quarantined = await self.store.quarantine_media_ids_in_room(
@ -98,7 +98,7 @@ class QuarantineMediaByUser(RestServlet):
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester)
logging.info("Quarantining media by user: %s", user_id)
logger.info("Quarantining media by user: %s", user_id)
# Quarantine all media this user has uploaded
num_quarantined = await self.store.quarantine_media_ids_by_user(
@ -127,7 +127,7 @@ class QuarantineMediaByID(RestServlet):
requester = await self.auth.get_user_by_req(request)
await assert_user_is_admin(self.auth, requester)
logging.info("Quarantining media by ID: %s/%s", server_name, media_id)
logger.info("Quarantining media by ID: %s/%s", server_name, media_id)
# Quarantine this media id
await self.store.quarantine_media_by_id(
@ -155,7 +155,7 @@ class UnquarantineMediaByID(RestServlet):
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
logging.info("Remove from quarantine media by ID: %s/%s", server_name, media_id)
logger.info("Remove from quarantine media by ID: %s/%s", server_name, media_id)
# Remove from quarantine this media id
await self.store.quarantine_media_by_id(server_name, media_id, None)
@ -177,7 +177,7 @@ class ProtectMediaByID(RestServlet):
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
logging.info("Protecting local media by ID: %s", media_id)
logger.info("Protecting local media by ID: %s", media_id)
# Protect this media id
await self.store.mark_local_media_as_safe(media_id, safe=True)
@ -199,7 +199,7 @@ class UnprotectMediaByID(RestServlet):
) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
logging.info("Unprotecting local media by ID: %s", media_id)
logger.info("Unprotecting local media by ID: %s", media_id)
# Unprotect this media id
await self.store.mark_local_media_as_safe(media_id, safe=False)
@ -280,7 +280,7 @@ class DeleteMediaByID(RestServlet):
if await self.store.get_local_media(media_id) is None:
raise NotFoundError("Unknown media")
logging.info("Deleting local media by ID: %s", media_id)
logger.info("Deleting local media by ID: %s", media_id)
deleted_media, total = await self.media_repository.delete_local_media_ids(
[media_id]
@ -327,9 +327,11 @@ class DeleteMediaByDateSize(RestServlet):
if server_name is not None and self.server_name != server_name:
raise SynapseError(HTTPStatus.BAD_REQUEST, "Can only delete local media")
logging.info(
"Deleting local media by timestamp: %s, size larger than: %s, keep profile media: %s"
% (before_ts, size_gt, keep_profiles)
logger.info(
"Deleting local media by timestamp: %s, size larger than: %s, keep profile media: %s",
before_ts,
size_gt,
keep_profiles,
)
deleted_media, total = await self.media_repository.delete_old_local_media(

View File

@ -109,6 +109,11 @@ class CapabilitiesRestServlet(RestServlet):
"disallowed"
] = disallowed
if self.config.experimental.msc4267_enabled:
response["capabilities"]["org.matrix.msc4267.forget_forced_upon_leave"] = {
"enabled": self.config.room.forget_on_leave,
}
return HTTPStatus.OK, response

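When `msc4267_enabled` is set, the capabilities response gains the unstable entry below; `enabled` simply mirrors `room.forget_on_leave` (shape reconstructed from the code above):

```python
response = {
    "capabilities": {
        "org.matrix.msc4267.forget_forced_upon_leave": {
            # True when the server forgets rooms on leave
            # (config.room.forget_on_leave).
            "enabled": True,
        },
    },
}
```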
View File

@ -150,6 +150,44 @@ class ReportRoomRestServlet(RestServlet):
return 200, {}
class ReportUserRestServlet(RestServlet):
"""This endpoint lets clients report a user for abuse.
Introduced by MSC4260: https://github.com/matrix-org/matrix-spec-proposals/pull/4260
"""
PATTERNS = list(
client_patterns(
"/users/(?P<target_user_id>[^/]*)/report$",
releases=("v3",),
unstable=False,
v1=False,
)
)
def __init__(self, hs: "HomeServer"):
super().__init__()
self.hs = hs
self.auth = hs.get_auth()
self.clock = hs.get_clock()
self.store = hs.get_datastores().main
self.handler = hs.get_reports_handler()
class PostBody(RequestBodyModel):
reason: StrictStr
async def on_POST(
self, request: SynapseRequest, target_user_id: str
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
body = parse_and_validate_json_object_from_request(request, self.PostBody)
await self.handler.report_user(requester, target_user_id, body.reason)
return 200, {}
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
ReportEventRestServlet(hs).register(http_server)
ReportRoomRestServlet(hs).register(http_server)
ReportUserRestServlet(hs).register(http_server)

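A hypothetical client-side call against the new endpoint (path and body shape taken from the servlet above; the homeserver URL and token are placeholders):

```python
import json
import urllib.parse
import urllib.request


def report_user(homeserver: str, access_token: str, user_id: str, reason: str) -> None:
    """POST /_matrix/client/v3/users/{userId}/report (MSC4260)."""
    url = (
        f"{homeserver}/_matrix/client/v3/users/"
        f"{urllib.parse.quote(user_id)}/report"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps({"reason": reason}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        assert resp.status == 200


report_user(
    "https://matrix.example.com",
    "syt_...",  # placeholder access token
    "@spammer:example.com",
    "Sends unsolicited invites",
)
```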
View File

@ -64,6 +64,7 @@ from synapse.logging.opentracing import set_tag
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.rest.client._base import client_patterns
from synapse.rest.client.transactions import HttpTransactionCache
from synapse.state import CREATE_KEY, POWER_KEY
from synapse.streams.config import PaginationConfig
from synapse.types import JsonDict, Requester, StreamToken, ThirdPartyInstanceID, UserID
from synapse.types.state import StateFilter
@ -924,15 +925,15 @@ class RoomEventServlet(RestServlet):
if include_unredacted_content and not await self.auth.is_server_admin(
requester
):
power_level_event = (
await self._storage_controllers.state.get_current_state_event(
room_id, EventTypes.PowerLevels, ""
auth_events = await self._storage_controllers.state.get_current_state(
room_id,
StateFilter.from_types(
[
POWER_KEY,
CREATE_KEY,
]
),
)
)
auth_events = {}
if power_level_event:
auth_events[(EventTypes.PowerLevels, "")] = power_level_event
redact_level = event_auth.get_named_level(auth_events, "redact", 50)
user_level = event_auth.get_user_power_level(
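For reference, the `StateFilter` above narrows the state fetch to exactly the two events the redaction-level check needs; a standalone equivalent using literal type/state-key pairs:

```python
from synapse.types.state import StateFilter

# POWER_KEY and CREATE_KEY (from synapse.state) are the
# (event type, state key) pairs for these two state events.
state_filter = StateFilter.from_types(
    [
        ("m.room.power_levels", ""),
        ("m.room.create", ""),
    ]
)
```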
@ -1537,6 +1538,7 @@ class RoomHierarchyRestServlet(RestServlet):
super().__init__()
self._auth = hs.get_auth()
self._room_summary_handler = hs.get_room_summary_handler()
self.msc4235_enabled = hs.config.experimental.msc4235_enabled
async def on_GET(
self, request: SynapseRequest, room_id: str
@ -1546,6 +1548,15 @@ class RoomHierarchyRestServlet(RestServlet):
max_depth = parse_integer(request, "max_depth")
limit = parse_integer(request, "limit")
# twisted.web.server.Request.args is incorrectly defined as Optional[Any]
remote_room_hosts = None
if self.msc4235_enabled:
args: Dict[bytes, List[bytes]] = request.args # type: ignore
via_param = parse_strings_from_args(
args, "org.matrix.msc4235.via", required=False
)
remote_room_hosts = tuple(via_param or [])
return 200, await self._room_summary_handler.get_room_hierarchy(
requester,
room_id,
@ -1553,6 +1564,7 @@ class RoomHierarchyRestServlet(RestServlet):
max_depth=max_depth,
limit=limit,
from_token=parse_string(request, "from"),
remote_room_hosts=remote_room_hosts,
)

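A hypothetical request exercising the new parameter (only honoured when `msc4235_enabled` is set; repeating the query key supplies multiple candidate servers):

```python
from urllib.parse import quote, urlencode

room_id = quote("!room:example.com")
params = urlencode(
    [
        ("org.matrix.msc4235.via", "one.example.com"),
        ("org.matrix.msc4235.via", "two.example.com"),
        ("limit", "10"),
    ]
)
# GET /_matrix/client/v1/rooms/{roomId}/hierarchy with via hints:
url = f"/_matrix/client/v1/rooms/{room_id}/hierarchy?{params}"
```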
View File

@ -81,7 +81,7 @@ class ConsentResource(DirectServeHtmlResource):
"""
def __init__(self, hs: "HomeServer"):
super().__init__()
super().__init__(clock=hs.get_clock())
self.hs = hs
self.store = hs.get_datastores().main

View File

@ -44,7 +44,7 @@ class FederationWhitelistResource(DirectServeJsonResource):
PATH = "/_synapse/client/v1/config/federation_whitelist"
def __init__(self, hs: "HomeServer"):
super().__init__()
super().__init__(clock=hs.get_clock())
self._federation_whitelist = hs.config.federation.federation_domain_whitelist

View File

@ -33,7 +33,7 @@ logger = logging.getLogger(__name__)
class JwksResource(DirectServeJsonResource):
def __init__(self, hs: "HomeServer"):
super().__init__(extract_context=True)
super().__init__(clock=hs.get_clock(), extract_context=True)
# Parameters that are allowed to be exposed in the public key.
# This is done manually, because authlib's private to public key conversion

View File

@ -48,7 +48,7 @@ class NewUserConsentResource(DirectServeHtmlResource):
"""
def __init__(self, hs: "HomeServer"):
super().__init__()
super().__init__(clock=hs.get_clock())
self._sso_handler = hs.get_sso_handler()
self._server_name = hs.hostname
self._consent_version = hs.config.consent.user_consent_version

View File

@ -35,7 +35,7 @@ class OIDCBackchannelLogoutResource(DirectServeJsonResource):
isLeaf = 1
def __init__(self, hs: "HomeServer"):
super().__init__()
super().__init__(clock=hs.get_clock())
self._oidc_handler = hs.get_oidc_handler()
async def _async_render_POST(self, request: SynapseRequest) -> None:

View File

@ -35,7 +35,7 @@ class OIDCCallbackResource(DirectServeHtmlResource):
isLeaf = 1
def __init__(self, hs: "HomeServer"):
super().__init__()
super().__init__(clock=hs.get_clock())
self._oidc_handler = hs.get_oidc_handler()
async def _async_render_GET(self, request: SynapseRequest) -> None:

View File

@ -47,7 +47,7 @@ class PasswordResetSubmitTokenResource(DirectServeHtmlResource):
Args:
hs: server
"""
super().__init__()
super().__init__(clock=hs.get_clock())
self.clock = hs.get_clock()
self.store = hs.get_datastores().main

View File

@ -44,7 +44,7 @@ class PickIdpResource(DirectServeHtmlResource):
"""
def __init__(self, hs: "HomeServer"):
super().__init__()
super().__init__(clock=hs.get_clock())
self._sso_handler = hs.get_sso_handler()
self._sso_login_idp_picker_template = (
hs.config.sso.sso_login_idp_picker_template

View File

@ -62,7 +62,7 @@ def pick_username_resource(hs: "HomeServer") -> Resource:
class AvailabilityCheckResource(DirectServeJsonResource):
def __init__(self, hs: "HomeServer"):
super().__init__()
super().__init__(clock=hs.get_clock())
self._sso_handler = hs.get_sso_handler()
async def _async_render_GET(self, request: Request) -> Tuple[int, JsonDict]:
@ -78,7 +78,7 @@ class AvailabilityCheckResource(DirectServeJsonResource):
class AccountDetailsResource(DirectServeHtmlResource):
def __init__(self, hs: "HomeServer"):
super().__init__()
super().__init__(clock=hs.get_clock())
self._sso_handler = hs.get_sso_handler()
def template_search_dirs() -> Generator[str, None, None]:

View File

@ -30,7 +30,7 @@ class MSC4108RendezvousSessionResource(DirectServeJsonResource):
isLeaf = True
def __init__(self, hs: "HomeServer") -> None:
super().__init__()
super().__init__(clock=hs.get_clock())
self._handler = hs.get_rendezvous_handler()
async def _async_render_GET(self, request: SynapseRequest) -> None:

View File

@ -35,7 +35,7 @@ class SAML2ResponseResource(DirectServeHtmlResource):
isLeaf = 1
def __init__(self, hs: "HomeServer"):
super().__init__()
super().__init__(clock=hs.get_clock())
self._saml_handler = hs.get_saml_handler()
self._sso_handler = hs.get_sso_handler()

Some files were not shown because too many files have changed in this diff.