8360 Commits

Author SHA1 Message Date
Richard Guo
e0d0529526 Expand virtual generated columns before sublink pull-up
Currently, we expand virtual generated columns after we have pulled up
any SubLinks within the query's quals.  This ensures that the virtual
generated column references within SubLinks that should be transformed
into joins are correctly expanded.  This approach works well and has
posed no issues.

In an upcoming patch, we plan to centralize the collection of catalog
information needed early in the planner.  This will help avoid
repeated table_open/table_close calls for relations in the rangetable.
Since this information is required during sublink pull-up, we are
moving the expansion of virtual generated columns to occur beforehand.

To achieve this, if any EXISTS SubLinks can be pulled up, their
rangetables are processed just before pulling them up.

Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMbWs4-bFJ1At4btk5wqbezdu8PLtQ3zv-aiaY3ry9Ymm=jgFQ@mail.gmail.com
2025-07-22 11:19:17 +09:00
Alexander Korotkov
cdf1f5a607 Reintroduce test 046_checkpoint_logical_slot
This commit is only for HEAD and v18, where the test has been removed.
It also incorporates the improvements below to the stability and
coverage of the original test, which were already backpatched to v17.
- Add one pg_logical_emit_message() call to force the creation of a record
  that spans two pages.
- Make the logic wait for the checkpoint completion.

Author: Alexander Korotkov <akorotkov@postgresql.org>
Co-authored-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 18
2025-07-19 13:59:17 +03:00
Alexander Korotkov
ccd9451593 Improve the stability of the recovery test 047_checkpoint_physical_slot
Currently, the comments in 047_checkpoint_physical_slot show an
incomplete intention to wait for checkpoint completion before performing
an immediate database stop.  However, an immediate node stop can occur both
before and after checkpoint completion.  Both cases should work correctly.
But we would like the test to be more stable and deterministic.  This is why
this commit makes this test explicitly wait for the checkpoint completion
log message.

Discussion: https://postgr.es/m/CAPpHfdurV-j_e0pb%3DUFENAy3tyzxfF%2ByHveNDNQk2gM82WBU5A%40mail.gmail.com
Discussion: https://postgr.es/m/aHXLep3OaX_vRTNQ%40paquier.xyz
Author: Alexander Korotkov <akorotkov@postgresql.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 17
2025-07-19 13:51:07 +03:00
Michael Paquier
1e9b5140c4 Check status of nodes after regression test run in 027_stream_regress
This commit improves the recovery TAP test 027_stream_regress so that
regression diffs are printed only if both the primary and the standby
are still alive after the main regression test suite finishes, relying
on d4c9195eff41 to do the job.

In particular, a crash of the primary could clutter the reported
contents with mostly useless data, as the diffs would refer to queries
that failed to run, not necessarily to the cause of the crash.

Suggested-by: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ1D6KXvjSs7YGsDeadqCxNF3UUhjRAfforzzP0k-cE=bA@mail.gmail.com
2025-07-19 15:03:14 +09:00
Michael Paquier
d4c9195eff Add PostgreSQL::Test::Cluster::is_alive()
This new routine acts as a wrapper of pg_isready, that can be run on a
node to check its connection status.  This will be used in a recovery
test in a follow-up commit.

Suggested-by: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ1D6KXvjSs7YGsDeadqCxNF3UUhjRAfforzzP0k-cE=bA@mail.gmail.com
2025-07-19 14:38:52 +09:00
Tom Lane
3683af6170 Speed up byteain by not parsing traditional-style input twice.
Instead of laboriously computing the exact output length, use strlen
to get an upper bound cheaply.  (This is still O(N) of course, but
the constant factor is a lot less.)  This will typically result in
overallocating the output datum, but that's of little concern since
it's a short-lived allocation in just about all use-cases.

A simple microbenchmark showed about 40% speedup for long input
strings.

While here, make some cosmetic cleanups and add a test case that
covers the double-backslash code path in byteain and byteaout.
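
An illustrative sketch of traditional-style input exercising that path
(output format forced to "escape" for readability):

    SET bytea_output = 'escape';
    SELECT 'ab\\cd'::bytea;  -- the \\ pair decodes to a single backslash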

Author: Steven Niu <niushiji@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Stepan Neretin <slpmcf@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/ca315729-140b-426e-81a6-6cd5cfe7ecc5@gmail.com
2025-07-18 16:42:10 -04:00
Dean Rasheed
5022ff250e Fix concurrent update trigger issues with MERGE in a CTE.
If a MERGE inside a CTE attempts an UPDATE or DELETE on a table with
BEFORE ROW triggers, and a concurrent UPDATE or DELETE happens, the
merge code would fail (crashing in the case of an UPDATE action, and
potentially executing the wrong action for a DELETE action).
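
For illustration, a hypothetical query of the affected shape (tables
"target" and "source" are assumptions here):

    WITH m AS (
        MERGE INTO target t
        USING source s ON t.id = s.id
        WHEN MATCHED THEN UPDATE SET val = s.val
        RETURNING t.id
    )
    SELECT count(*) FROM m;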

This is the same issue that 9321c79c86 attempted to fix, except now
for a MERGE inside a CTE. As noted in 9321c79c86, what needs to happen
is for the trigger code to exit early, returning the TM_Result and
TM_FailureData information to the merge code, if a concurrent
modification is detected, rather than attempting to do an EPQ
recheck. The merge code will then do its own rechecking, and rescan
the action list, potentially executing a different action in light of
the concurrent update. In particular, the trigger code must never call
ExecGetUpdateNewTuple() for MERGE, since that is bound to fail because
MERGE has its own per-action projection information.

Commit 9321c79c86 did this using estate->es_plannedstmt->commandType
in the trigger code to detect that a MERGE was being executed, which
is fine for a plain MERGE command, but does not work for a MERGE
inside a CTE. Fix by passing that information to the trigger code as
an additional parameter passed to ExecBRUpdateTriggers() and
ExecBRDeleteTriggers().

Back-patch as far as v17 only, since MERGE cannot appear inside a CTE
prior to that. Additionally, take care to preserve the trigger ABI in
v17 (though not in v18, which is still in beta).

Bug: #18986
Reported-by: Yaroslav Syrytsia <me@ys.lc>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18986-e7a8aac3d339fa47@postgresql.org
Backpatch-through: 17
2025-07-18 09:55:43 +01:00
Nathan Bossart
b597ae6cc1 Add a test harness for the binary heap code.
binaryheap is heavily used and already has decent test coverage,
but it lacks dedicated tests for its correctness.  This commit
changes that.

Author: Aleksander Alekseev <aleksander@tigerdata.com>
Discussion: https://postgr.es/m/CAJ7c6TMwp%2Bmb8MMoi%3DSMVMso2hYecoVu2Pwf2EOkesq0MiSKxw%40mail.gmail.com
2025-07-17 16:32:10 -05:00
Michael Paquier
74a3fc36f3 Split regression tests for TOAST compression methods into two files
The regression tests for TOAST compression methods are split into two
independent files: one specific to LZ4 and interactions between two
different TOAST compression methods, now called compression_lz4, and a
second one for the "core" cases where only pglz is required.

This saves 300 lines in diffs coming from the alternate output of
compression.sql, required for builds where lz4 is not available.  The
new test is skipped if the build does not support LZ4 compression,
relying on an \if and the values reported in pg_settings for the GUC
default_toast_compression, "lz4" being available only under USE_LZ4.
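
The skip logic looks roughly like this (a sketch, not the literal test
file):

    SELECT NOT ('lz4' = ANY(enumvals)) AS skip_test
      FROM pg_settings WHERE name = 'default_toast_compression' \gset
    \if :skip_test
    \quit
    \endif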

Another benefit of this split is that this facilitates the addition of
more compression methods for TOAST, which are under discussion.

Note the trick added for the tests of the GUC default_toast_compression,
where VERBOSITY = terse is used to avoid the HINT printing the lists of
values available in the GUC, which are environment-dependent.  This
makes compression.sql independent of the availability of LZ4.

The code coverage of toast_compression.c is slightly improved, increasing
from 89% to 91%, with one new case covered in lz4_compress_datum() for
incompressible data.

Author: Nikhil Kumar Veldanda <veldanda.nikhilkumar17@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aDlcU-ym9KfMj9sG@paquier.xyz
2025-07-17 14:08:55 +09:00
Álvaro Herrera
0858f0f96e
Fix dumping of comments on invalid constraints on domains
We skip dumping constraints together with domains if they are invalid
('separate') so that they appear after data -- but their comments were
dumped together with the domain definition, which in effect leads to the
comment being dumped when the constraint does not yet exist.  Delay
them in the same way.

Oversight in 7eca575d1c28; backpatch all the way back.

Author: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF_C2pe6J_+nPr6C5jf5rQnbYP8XOKr4HM8yHZtp2aQqQ@mail.gmail.com
2025-07-16 19:22:53 +02:00
Tom Lane
3c4e26a62c In username-map substitution, cope with more than one \1.
If the system-name field of a pg_ident.conf line is a regex
containing capturing parentheses, you can write \1 in the
user-name field to represent the captured part of the system
name.  But what happens if you write \1 more than once?
The only reasonable expectation IMO is that each \1 gets
replaced, but presently our code replaces only the first.
Fix that.
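
For example, with a hypothetical map like the following, both \1
occurrences now get the captured system-name fragment substituted:

    # MAPNAME   SYSTEM-USERNAME          PG-USERNAME
    mymap       /^(.*)@example\.com$     \1_as_\1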

Also, improve the tests for this feature to exercise cases
where a non-empty string needs to be substituted for \1.
The previous testing didn't inspire much faith that it
was verifying correct operation of the substitution code.

Given the lack of field complaints about this, I don't
feel a need to back-patch.

Reported-by: David G. Johnston <david.g.johnston@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAKFQuwZu6kZ8ZPvJ3pWXig+6UX4nTVK-hdL_ZS3fSdps=RJQQQ@mail.gmail.com
2025-07-13 13:52:32 -04:00
Tom Lane
64840e4624 Fix inconsistent quoting of role names in ACLs.
getid() and putid(), which parse and deparse role names within ACL
input/output, applied isalnum() to see if a character within a role
name requires quoting.  They did this even for non-ASCII characters,
which is problematic because the results would depend on encoding,
locale, and perhaps even platform.  So it's possible that putid()
could elect not to quote some string that, later in some other
environment, getid() will decide is not a valid identifier, causing
dump/reload or similar failures.

To fix this in a way that won't risk interoperability problems
with unpatched versions, make getid() treat any non-ASCII as a
legitimate identifier character (hence not requiring quotes),
while making putid() treat any non-ASCII as requiring quoting.
We could remove the resulting excess quoting once we feel that
no unpatched servers remain in the wild, but that'll be years.

A lesser problem is that getid() did the wrong thing with an input
consisting of just two double quotes ("").  That has to represent an
empty string, but getid() read it as a single double quote instead.
The case cannot arise in the normal course of events, since we don't
allow empty-string role names.  But let's fix it while we're here.

Although we've not heard field reports of problems with non-ASCII
role names, there's clearly a hazard there, so back-patch to all
supported versions.

Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3792884.1751492172@sss.pgh.pa.us
Backpatch-through: 13
2025-07-11 18:50:13 -04:00
Nathan Bossart
8d33fbacba Add FLUSH_UNLOGGED option to CHECKPOINT command.
This option, which is disabled by default, can be used to request
the checkpoint also flush dirty buffers of unlogged relations.  As
with the MODE option, the server may consolidate the options for
concurrently requested checkpoints.  For example, if one session
uses (FLUSH_UNLOGGED FALSE) and another uses (FLUSH_UNLOGGED TRUE),
the server may perform one checkpoint with FLUSH_UNLOGGED enabled.
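
A usage sketch of the new option:

    CHECKPOINT (FLUSH_UNLOGGED TRUE);   -- also flush unlogged relations
    CHECKPOINT (FLUSH_UNLOGGED FALSE);  -- the default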

Author: Christoph Berg <myon@debian.org>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Discussion: https://postgr.es/m/aDnaKTEf-0dLiEfz%40msg.df7cb.de
2025-07-11 11:51:25 -05:00
Nathan Bossart
2f698d7f4b Add MODE option to CHECKPOINT command.
This option may be set to FAST (the default) to request the
checkpoint be completed as fast as possible, or SPREAD to request
the checkpoint be spread over a longer interval (based on the
checkpoint-related configuration parameters).  Note that the server
may consolidate the options for concurrently requested checkpoints.
For example, if one session requests a "fast" checkpoint and
another requests a "spread" checkpoint, the server may perform one
"fast" checkpoint.

Author: Christoph Berg <myon@debian.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Discussion: https://postgr.es/m/aDnaKTEf-0dLiEfz%40msg.df7cb.de
2025-07-11 11:51:25 -05:00
Nathan Bossart
a4f126516e Add option list to CHECKPOINT command.
This commit adds the boilerplate code for supporting a list of
options in CHECKPOINT commands.  No actual options are supported
yet, but follow-up commits will add support for MODE and
FLUSH_UNLOGGED.  While at it, this commit refactors the code for
executing CHECKPOINT commands to its own function since it's about
to become significantly larger.

Author: Christoph Berg <myon@debian.org>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://postgr.es/m/aDnaKTEf-0dLiEfz%40msg.df7cb.de
2025-07-11 11:51:25 -05:00
Nathan Bossart
bb938e2c3c Rename CHECKPOINT_IMMEDIATE to CHECKPOINT_FAST.
The new name more accurately reflects the effects of this flag on a
requested checkpoint.  Checkpoint-related log messages (i.e., those
controlled by the log_checkpoints configuration parameter) will now
say "fast" instead of "immediate", too.  Likewise, references to
"immediate" checkpoints in the documentation have been updated to
say "fast".  This is preparatory work for a follow-up commit that
will add a MODE option to the CHECKPOINT command.

Author: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/aDnaKTEf-0dLiEfz%40msg.df7cb.de
2025-07-11 11:51:25 -05:00
Tom Lane
f25792c541 Force LC_NUMERIC to C while running TAP tests.
We already forced LC_MESSAGES to C in order to get consistent
message output, but that isn't enough to stabilize messages
that include %f or similar formatting.

I'm a bit surprised that this hasn't come up before.  Perhaps
we ought to back-patch this change, but I'll refrain for now.

Reported-by: Bernd Helmle <mailings@oopsware.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6f024eaa7885eddf5e0eb4ba1d095fbc7146519b.camel@oopsware.de
2025-07-11 12:49:07 -04:00
Daniel Gustafsson
a6c0bf9303 Fix sslkeylogfile error handling logging
When sslkeylogfile has been set but the file fails to open in an
otherwise successful connection, the log entry added to the conn
object is never printed.  Instead print the error on stderr for
increased visibility.  This is a debugging tool so using stderr
for logging is appropriate.  Also while there, remove the umask
call in the callback as it's not useful.

Issues noted by Peter Eisentraut in post-commit review.  Backpatch
down to 18, where support for sslkeylogfile was added.

Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/70450bee-cfaa-48ce-8980-fc7efcfebb03@eisentraut.org
Backpatch-through: 18
2025-07-10 23:26:51 +02:00
Michael Paquier
4eca711bc9 injection_points: Add injection_points_list()
This function can be used to retrieve the information about all the
injection points attached to a cluster, providing coverage for
InjectionPointList() introduced in 7b2eb72b1b8c.
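
A minimal usage sketch (result columns elided here):

    SELECT * FROM injection_points_list();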

The original proposal was built around a system function, but that would
not be backpatchable to stable branches.  It was also a bit weird to
have a system function that fails depending on whether the build allows
injection points.

Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Discussion: https://postgr.es/m/Z_xYkA21KyLEHvWR@paquier.xyz
2025-07-10 10:01:20 +09:00
Nathan Bossart
167ed8082f Introduce pg_dsm_registry_allocations view.
This commit adds a new system view that provides information about
entries in the dynamic shared memory (DSM) registry.  Specifically,
it returns the name, type, and size of each entry.  Note that since
we cannot discover the size of dynamic shared memory areas (DSAs)
and hash tables backed by DSAs (dshashes) without first attaching
to them, the size column is left as NULL for those.
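
A simple query sketch using the columns described above:

    SELECT name, type, size
      FROM pg_dsm_registry_allocations
      ORDER BY name;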

Bumps catversion.

Author: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Sungwoo Chang <swchangdev@gmail.com>
Discussion: https://postgr.es/m/4D445D3E-81C5-4135-95BB-D414204A0AB4%40gmail.com
2025-07-09 09:17:56 -05:00
Richard Guo
55a780e947 Consider explicit incremental sort for Append and MergeAppend
For an ordered Append or MergeAppend, we need to inject an explicit
sort into any subpath that is not already well enough ordered.
Currently, only explicit full sorts are considered; incremental sorts
are not yet taken into account.

In this patch, for subpaths of an ordered Append or MergeAppend, we
choose to use explicit incremental sort if it is enabled and there are
presorted keys.

The rationale is based on the assumption that incremental sort is
always faster than full sort when there are presorted keys, a premise
that has been applied in various parts of the code.  In addition, the
current cost model tends to favor incremental sort as being cheaper
than full sort in the presence of presorted keys, making it reasonable
not to consider full sort in such cases.

No backpatch as this could result in plan changes.

Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CAMbWs4_V7a2enTR+T3pOY_YZ-FU8ZsFYym2swOz4jNMqmSgyuw@mail.gmail.com
2025-07-08 10:21:44 +09:00
Álvaro Herrera
3adcf9fbd8
Adapt pg_upgrade test to pg_lsn output format difference
Commit 2633dae2e487 added some zero padding to various LSNs output
routines so that the low word is always 8 hex digits long, for easy
human consumption.  This included the pg_lsn datatype, which breaks the
pg_upgrade test when it compares the pg_dump output of an older version.
Silence this problem by setting the pg_lsn columns to NULL before the
upgrade.

Discussion: https://postgr.es/m/202507071504.xm2r26u7lmzr@alvherre.pgsql
2025-07-07 22:38:40 +02:00
Álvaro Herrera
2633dae2e4
Standardize LSN formatting by zero padding
This commit standardizes the output format for LSNs to ensure consistent
representation across various tools and messages.  Previously, LSNs were
inconsistently printed as `%X/%X` in some contexts, while others used
zero-padding.  This often led to confusion when comparing LSN values.

To address this, the LSN format is now uniformly set to `%X/%08X`,
ensuring the lower 32-bit part is always zero-padded to eight
hexadecimal digits.
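
For instance (a sketch), a pg_lsn value now prints with the low word
padded:

    SELECT '0/1690DA8'::pg_lsn;  -- now displays as 0/01690DA8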

Author: Japin Li <japinli@hotmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/ME0P300MB0445CA53CA0E4B8C1879AF84B641A@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2025-07-07 13:57:43 +02:00
Michael Paquier
5a6c39b6df Disable commit timestamps during bootstrap
Attempting to use commit timestamps during bootstrapping leads to an
assertion failure, that can be reached for example with an initdb -c
that enables track_commit_timestamp.  It makes little sense to register
a commit timestamp for a BootstrapTransactionId, so let's disable the
activation of the module in this case.

This problem has been independently reported once by each author of this
commit.  Each author has proposed basically the same patch, relying on
IsBootstrapProcessingMode() to skip the use of commit_ts during
bootstrap.  The test addition is a suggestion by me, and is applied down
to v16.

Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Author: Andy Fan <zhihuifan1213@163.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/OSCPR01MB14966FF9E4C4145F37B937E52F5102@OSCPR01MB14966.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/87plejmnpy.fsf@163.com
Backpatch-through: 13
2025-07-04 15:09:24 +09:00
Tom Lane
931766aaec Simplify COALESCE() with one surviving argument.
If, after removal of useless null-constant arguments, a CoalesceExpr
has exactly one remaining argument, we can just take that argument as
the result, without bothering to wrap a new CoalesceExpr around it.
This isn't likely to produce any great improvement in runtime per se,
but it can lead to better plans since the planner no longer has to
treat the expression as non-strict.
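
A sketch with a hypothetical table t(a int):

    EXPLAIN (VERBOSE, COSTS OFF)
    SELECT COALESCE(a, NULL) FROM t;  -- EXPLAIN now shows Output: a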

However, there were a few regression test cases that intentionally
wrote COALESCE(x) as a shorthand way of creating a non-strict
subexpression.  To avoid ruining the intent of those tests, write
COALESCE(x,x) instead.  (If anyone ever proposes de-duplicating
COALESCE arguments, we'll need another iteration of this arms race.
But it seems pretty unlikely that such an optimization would be
worthwhile.)

Author: Maksim Milyutin <maksim.milyutin@tantorlabs.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/8e8573c3-1411-448d-877e-53258b7b2be0@tantorlabs.ru
2025-07-03 17:39:53 -04:00
Tom Lane
a10f21e6ce Obtain required table lock during cross-table updates, redux.
Commits 8319e5cb5 et al missed the fact that ATPostAlterTypeCleanup
contains three calls to ATPostAlterTypeParse, and the other two
also need protection against passing a relid that we don't yet
have lock on.  Add similar logic to those code paths, and add
some test cases demonstrating the need for it.

In v18 and master, the test cases demonstrate that there's a
behavioral discrepancy between stored generated columns and virtual
generated columns: we disallow changing the expression of a stored
column if it's used in any composite-type columns, but not that of
a virtual column.  Since the expression isn't actually relevant to
either sort of composite-type usage, this prohibition seems
unnecessary; but changing it is a matter for separate discussion.
For now we are just documenting the existing behavior.

Reported-by: jian he <jian.universality@gmail.com>
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CACJufxGKJtGNRRSXfwMW9SqVOPEMdP17BJ7DsBf=tNsv9pWU9g@mail.gmail.com
Backpatch-through: 13
2025-07-03 13:46:07 -04:00
Álvaro Herrera
647cffd2f3
Prevent creation of duplicate not-null constraints for domains
This was previously harmless, but now that we create pg_constraint rows
for those, duplicates are not welcome anymore.

Backpatch to 18.

Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CACJufxFSC0mcQ82bSk58sO-WJY4P-o4N6RD2M0D=DD_u_6EzdQ@mail.gmail.com
2025-07-03 11:46:12 +02:00
Álvaro Herrera
87251e1149
Fix bogus grammar for a CREATE CONSTRAINT TRIGGER error
If certain constraint characteristic clauses (NO INHERIT, NOT VALID, NOT
ENFORCED) are given to CREATE CONSTRAINT TRIGGER, the resulting error
message is
  ERROR:  TRIGGER constraints cannot be marked NO INHERIT
which is a bit silly, because these aren't "constraints of type
TRIGGER".  Hardcode a better error message to prevent it.  This is a
cosmetic fix for quite a fringe problem with no known complaints from
users, so no backpatch.

While at it, silently accept ENFORCED if given.

Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CAAJ_b97hd-jMTS7AjgU6TDBCzDx_KyuKxG+K-DtYmOieg+giyQ@mail.gmail.com
Discussion: https://postgr.es/m/CACJufxHSp2puxP=q8ZtUGL1F+heapnzqFBZy5ZNGUjUgwjBqTQ@mail.gmail.com
2025-07-03 11:25:39 +02:00
Fujii Masao
bc2f348e87 Support multi-line headers in COPY FROM command.
The COPY FROM command now accepts a non-negative integer for the HEADER option,
allowing multiple header lines to be skipped. This is useful when the input
contains multi-line headers that should be ignored during data import.
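
A usage sketch (hypothetical table and file names):

    COPY mytable FROM '/path/to/input.csv'
      WITH (FORMAT csv, HEADER 2);  -- skip the first two lines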

Author: Shinya Kato <shinya11.kato@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/CAOzEurRPxfzbxqeOPF_AGnAUOYf=Wk0we+1LQomPNUNtyZGBZw@mail.gmail.com
2025-07-03 15:27:26 +09:00
Michael Paquier
fd7d7b7191 Improve checks for GUC recovery_target_timeline
Currently check_recovery_target_timeline() converts any value that is
not "current", "latest", or a valid integer to 0.  So, for example, the
following configuration added to postgresql.conf followed by a startup:
recovery_target_timeline = 'bogus'
recovery_target_timeline = '9999999999'

...  results in the following error patterns:
FATAL:  22023: recovery target timeline 0 does not exist
FATAL:  22023: recovery target timeline 1410065407 does not exist

This is confusing, because the server does not reflect the intention of
the user, and just reports incorrect data unrelated to the GUC.

The origin of the problem is that we do not perform a range check on the
GUC value passed in for recovery_target_timeline.  This commit improves
the situation by using strtou64() and by providing stricter range
checks.  Some test cases are added for the cases of an incorrect, an
upper-bound and a lower-bound timeline value, checking the sanity of the
reports based on the contents of the server logs.

Author: David Steele <david@pgmasters.net>
Discussion: https://postgr.es/m/e5d472c7-e9be-4710-8dc4-ebe721b62cea@pgbackrest.org
2025-07-03 11:14:20 +09:00
Richard Guo
0da29e4cb1 Enable use of Memoize for ANTI joins
Currently, we do not support Memoize for SEMI and ANTI joins because
nested loop SEMI/ANTI joins do not scan the inner relation to
completion, which prevents Memoize from marking the cache entry as
complete.  One might argue that we could mark the cache entry as
complete after fetching the first inner tuple, but that would not be
safe: if the first inner tuple and the current outer tuple do not
satisfy the join clauses, a second inner tuple matching the parameters
would find the cache entry already marked as complete.

However, if the inner side is provably unique, this issue doesn't
arise, since there would be no second matching tuple.  That said, this
doesn't help in the case of SEMI joins, because a SEMI join with a
provably unique inner side would already have been reduced to an inner
join by reduce_unique_semijoins.

Therefore, in this patch, we check whether the inner relation is
provably unique for ANTI joins and enable the use of Memoize in such
cases.
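
A sketch of the query shape that can now benefit (hypothetical tables,
with t2.pk unique):

    EXPLAIN (COSTS OFF)
    SELECT * FROM t1
     WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.pk = t1.x);
    -- the inner side of the ANTI join may now sit under a Memoize node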

Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CAMbWs48FdLiMNrmJL-g6mDvoQVt0yNyJAqMkv4e2Pk-5GKCZLA@mail.gmail.com
2025-07-03 10:57:26 +09:00
Nathan Bossart
0c2b7174c3 Fix cross-version upgrade test breakage from commit fe07100e82.
In commit fe07100e82, I renamed a couple of functions in
test_dsm_registry to make it clear what they are testing.  However,
the buildfarm's cross-version upgrade tests run pg_upgrade with the
test modules installed, so this caused errors like:

    ERROR:  could not find function "get_val_in_shmem" in file ".../test_dsm_registry.so"

To fix, revert those renames.  I could probably get away with only
un-renaming the C symbols, but I figured I'd avoid introducing
function name mismatches.  Also, AFAICT the buildfarm's
cross-version upgrade tests do not run the test module tests
post-upgrade; otherwise we would need to properly version the extension.

Per buildfarm member crake.

Discussion: https://postgr.es/m/aGVuYUNW23tStUYs%40nathan
2025-07-02 13:26:33 -05:00
Nathan Bossart
fe07100e82 Add GetNamedDSA() and GetNamedDSHash().
Presently, the dynamic shared memory (DSM) registry only provides
GetNamedDSMSegment(), which allocates a fixed-size segment.  To use
the DSM registry for more sophisticated things like dynamic shared
memory areas (DSAs) or a hash table backed by a DSA (dshash), users
need to create a DSM segment that stores various handles and LWLock
tranche IDs and to write fairly complicated initialization code.
Furthermore, there is likely little variation in this
initialization code between libraries.

This commit introduces functions that simplify allocating a DSA or
dshash within the DSM registry.  These functions are very similar
to GetNamedDSMSegment().  Notable differences include the lack of
an initialization callback parameter and the prohibition of calling
the functions more than once for a given entry in each backend
(which should be trivially avoidable in most circumstances).  While
at it, this commit bumps the maximum DSM registry entry name length
from 63 bytes to 127 bytes.

Also note that even though one could presumably detach/destroy the
DSAs and dshashes created in the registry, such use-cases are not
yet well-supported, if for no other reason than the associated DSM
registry entries cannot be removed.  Adding such support is left as
a future exercise.

The test_dsm_registry test module contains tests for the new
functions and also serves as a complete usage example.

Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Discussion: https://postgr.es/m/aEC8HGy2tRQjZg_8%40nathan
2025-07-02 11:50:52 -05:00
Tom Lane
7374b3a536 Allow width_bucket()'s "operand" input to be NaN.
The array-based variant of width_bucket() has always accepted NaN
inputs, treating them as equal but larger than any non-NaN,
as we do in ordinary comparisons.  But up to now, the four-argument
variants threw errors for a NaN operand.  This is inconsistent
and unnecessary, since we can perfectly well regard NaN as falling
after the last bucket.

We do still throw error for NaN or infinity histogram-bound inputs,
since there's no way to compute sensible bucket boundaries.
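
For instance (a sketch; NaN lands in the overflow bucket, count + 1):

    SELECT width_bucket('NaN'::float8, 0.0, 10.0, 5);  -- expected: 6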

Arguably this is a bug fix, but given the lack of field complaints
I'm content to fix it in master.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/2822872.1750540911@sss.pgh.pa.us
2025-07-02 11:34:40 -04:00
Álvaro Herrera
c989affb52
Fix error message for ALTER CONSTRAINT ... NOT VALID
Trying to alter a constraint so that it becomes NOT VALID results in an
error that assumes the constraint is a foreign key.  This is potentially
wrong, so give a more generic error message.

While at it, give CREATE CONSTRAINT TRIGGER a better error message as
well.

Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CACJufxHSp2puxP=q8ZtUGL1F+heapnzqFBZy5ZNGUjUgwjBqTQ@mail.gmail.com
2025-07-02 17:02:27 +02:00
Peter Geoghegan
bd3f59fdb7 Make row compares robust during nbtree array scans.
Recent nbtree bugfix commit 5f4d98d4 added a special case to the code
that sets up a page-level prefix of keys that are definitely satisfied
by every tuple on the page: whenever _bt_set_startikey reached a row
compare key, we'd refuse to apply the pstate.forcenonrequired behavior
in scans where that usually happens (scans with a higher-order array
key).  That hack made the scan avoid essentially the same infinite
cycling behavior that also affected nbtree scans with redundant keys
(keys that preprocessing could not eliminate) prior to commit f09816a0.
There are now serious doubts about this row compare workaround.

Testing has shown that a scan with a row compare key and an array key
could still read the same leaf page twice (without the scan's direction
changing), which isn't supposed to be possible following the SAOP
enhancements added by Postgres 17 commit 5bf748b8.  Also, we still
allowed a required row compare key to be used with forcenonrequired mode
when its header key happened to be beyond the pstate.ikey set by
_bt_set_startikey, which was complicated and brittle.

The underlying problem was that row compares had inconsistent rules
around how scans start (which keys can be used for initial positioning
purposes) and how scans end (which keys can set continuescan=false).
Quals with redundant keys that could not be eliminated by preprocessing
also had that same quality to them prior to today's bugfix f09816a0.  It
now seems prudent to bring row compare keys in line with the new charter
for required keys, by making the start and end rules symmetric.

This commit fixes two points of disagreement between _bt_first and
_bt_check_rowcompare.  Firstly, _bt_check_rowcompare was capable of
ending the scan at the point where it needed to compare an ISNULL-marked
row compare member that came immediately after a required row compare
member.  _bt_first now has symmetric handling for NULL row compares.
Secondly, _bt_first had its own ideas about which keys were safe to use
for initial positioning purposes.  It could use fewer or more keys than
_bt_check_rowcompare.  _bt_first now uses the same requiredness markings
as _bt_check_rowcompare for this.

Now that _bt_first and _bt_check_rowcompare agree on how to start and
end scans, we can get rid of the forcenonrequired special case, without
any risk of infinite cycling.  This approach also makes row compare keys
behave more like regular scalar keys, particularly within _bt_first.

Fixing these inconsistencies necessitates dealing with a related issue
with the way that row compares were marked required by preprocessing: we
didn't mark any lower-order row members required following 2016 bugfix
commit a298a1e0.  That approach was overly broad.  The bug in question was
actually an oversight in how _bt_check_rowcompare dealt with tuple NULL
values that failed to satisfy a scan key marked required in the opposite
scan direction (it was a bug in 2011 commits 6980f817 and 882368e8, not
a bug in 2006 commit 3a0a16cb).  Go back to marking row compare members
as required using the original 2006 rules, and fix the 2016 bug in a
more principled way: by limiting use of the "set continuescan=false with
a key required in the opposite scan direction upon encountering a NULL
tuple value" optimization to the first/most significant row member key.
While it isn't safe to use an implied IS NOT NULL qualifier to end the
scan when it comes from a required lower-order row compare member key,
it _is_ generally safe for such a required member key to end the scan --
provided the key is marked required in the _current_ scan direction.

This fixes what was arguably an oversight in either commit 5f4d98d4 or
commit 8a510275.  It is a direct follow-up to today's commit f09816a0.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=pcijHL_mA0_TJ5LiTB28QpQ0cGtT-ccFV=KzuunNDDQ@mail.gmail.com
Backpatch-through: 18
2025-07-02 09:48:15 -04:00
Peter Eisentraut
f039c22441 meson: Increase minimum version to 0.57.2
The previous minimum was to maintain support for Python 3.5, but we
now require Python 3.6 anyway (commit 45363fca637), so that reason is
obsolete.  A small raise to Meson 0.57 allows getting rid of a fair
amount of version conditionals and silences some future-deprecated
warnings.

With the version bump, the following deprecation warnings appeared and
are fixed:

WARNING: Project targets '>=0.57' but uses feature deprecated since '0.55.0': ExternalProgram.path. use ExternalProgram.full_path() instead
WARNING: Project targets '>=0.57' but uses feature deprecated since '0.56.0': meson.build_root. use meson.project_build_root() or meson.global_build_root() instead.

It turns out that meson 0.57.0 and 0.57.1 are buggy for our use, so
the minimum is actually set to 0.57.2.  This is specific to this
version series; in the future we won't necessarily need to be this
precise.

Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/42e13eb0-862a-441e-8d84-4f0fd5f6def0%40eisentraut.org
2025-07-02 11:14:53 +02:00
Nathan Bossart
bd09f024a1 Add new OID alias type regdatabase.
This provides a convenient way to look up a database's OID.  For
example, the query

    SELECT * FROM pg_shdepend
    WHERE dbid = (SELECT oid FROM pg_database
                  WHERE datname = current_database());

can now be simplified to

    SELECT * FROM pg_shdepend
    WHERE dbid = current_database()::regdatabase;

Like the regrole type, regdatabase has cluster-wide scope, so we
disallow regdatabase constants from appearing in stored
expressions.
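
A couple of simple casts, as a sketch:

    SELECT 'template1'::regdatabase;        -- displays the database name
    SELECT 'template1'::regdatabase::oid;   -- the underlying OID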

Bumps catversion.

Author: Ian Lawrence Barwick <barwick@gmail.com>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aBpjJhyHpM2LYcG0%40nathan
2025-06-30 15:38:54 -05:00
Andrew Dunstan
c3e28e9fd9 Avoid uninitialized value error in TAP tests' Cluster->psql
If the method is called in scalar context and we didn't pass in a stderr
handle, one won't be created. However, some error paths assume that it
exists, so in this case create a dummy stderr to avoid the resulting
perl error.

Per gripe from Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru> and
adapted from his patch.

Discussion: https://postgr.es/m/378eac5de4b8ecb5be7bcdf2db9d2c4d@postgrespro.ru
2025-06-30 09:49:50 -04:00
Michael Paquier
5ba00e175a Align log_line_prefix in CI and TAP tests with pg_regress.c
log_line_prefix is changed to include "%b", the backend type, in the TAP
test configuration.  %v and %x are removed from the CI configuration,
and the format around %b is changed.

The lack of backend type in the postgresql.conf set by Cluster.pm for
the TAP test configuration was something that had been bugging me,
and it began the discussion that led to this change.  The CI change
came up during that discussion, to stay consistent with pg_regress.c,
%v and %x not being that useful to have.

Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aC0VaIWAXLgXcHVP@paquier.xyz
2025-06-30 13:56:31 +09:00
Joe Conway
0ebd242555 Run pgperltidy
This is required before the creation of a new branch.  pgindent is
clean, as is reformat-dat-files.

perltidy version is v20230309, as documented in pgindent's README.
2025-06-29 21:14:21 -04:00
Tom Lane
66e9df9f6e Fix some new issues with planning of PlaceHolderVars.
In the wake of commit a16ef313f, we need to deal with more cases
involving PlaceHolderVars in NestLoopParams than we did before.

For one thing, a16ef313f was incorrect to suppose that we could
rely on the required-outer relids of the lefthand path to decide
placement of nestloop-parameter PHVs.  As Richard Guo argued at
the time, we must look at the required-outer relids of the join
path itself.

For another, we have to apply replace_nestloop_params() to such
a PHV's expression, in case it contains references to values that
will be supplied from NestLoopParams of higher-level nestloops.

For another, we need to be more careful about the phnullingrels
of the PHV than we were being.  identify_current_nestloop_params
only bothered to ensure that the phnullingrels didn't contain
"too many" relids, but now it has to be exact, because setrefs.c
will apply both NRM_SUBSET and NRM_SUPERSET checks in different
places.  We can compute the correct relids by determining the
set of outer joins that should be able to null the PHV and then
subtracting whatever's been applied at or below this join.
Do the same for plain Vars, too.  (This should make it possible
to use NRM_EQUAL to process nestloop params in setrefs.c, but
I won't risk making such a change in v18 now.)

Lastly, if a nestloop parameter PHV was pulled up out of a subquery
and it contains a subquery that was originally pushed down from this
query level, then that will still be represented as a SubLink, because
SS_process_sublinks won't recurse into outer PHVs, so it didn't get
transformed during expression preprocessing in the subquery.  We can
substitute the version of the PHV's expression appearing in its
PlaceHolderInfo to ensure that that preprocessing has happened.
(Seems like this processing sequence could stand to be redesigned,
but again, late in v18 development is not the time for that.)

It's not very clear to me why the old have_dangerous_phv join-order
restriction prevented us from seeing the last three of these problems.
But given the lack of field complaints, it must have done so.

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18953-1c9883a9d4afeb30@postgresql.org
2025-06-29 15:04:32 -04:00
Tom Lane
8319e5cb54 Obtain required table lock during cross-table constraint updates.
Sometimes a table's constraint may depend on a column of another
table, so that we have to update the constraint when changing the
referenced column's type.  We need to have lock on the constraint's
table to do that.  ATPostAlterTypeCleanup believed that this case
was only possible for FOREIGN KEY constraints, but it's wrong at
least for CHECK and EXCLUDE constraints; and in general, we'd
probably need exclusive lock to alter any sort of constraint.
So just remove the contype check and acquire lock for any other
table.  This prevents a "you don't have lock" assertion failure,
though no ill effect is observed in production builds.

We'll error out later anyway because we don't presently support
physically altering column types within stored composite columns.
But the catalog-munging is basically all there, so we may as well
make that part work.

Bug: #18970
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Diagnosed-by: jian he <jian.universality@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18970-a7d1cfe1f8d5d8d9@postgresql.org
Backpatch-through: 13
2025-06-29 13:56:03 -04:00
Peter Eisentraut
50fd428b2b Message style improvements
2025-06-28 19:18:06 +02:00
Nathan Bossart
bbccf7ecb3 Use correct DatumGet*() function in test_shm_mq_main().
This is purely cosmetic, as dsm_attach() interprets its argument as
a dsm_handle (i.e., an unsigned integer), but we might as well fix
it.

Oversight in commit 4db3744f1f.

Author: Jianghua Yang <yjhjstz@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAAZLFmRxkUD5jRs0W3K%3DUe4_ZS%2BRcAb0PCE1S0vVJBn3sWH2UQ%40mail.gmail.com
Backpatch-through: 13
2025-06-27 13:37:26 -05:00
Álvaro Herrera
47fb87563b
pg_dump: include comments on valid not-null constraints, too
We were missing collecting comments for not-null constraints that are
dumped inline with the table definition (i.e., valid ones), because they
aren't represented by a separately dumpable object.  Fix by creating
separate TocEntries for the comments.

Co-Authored-By: Jian He <jian.universality@gmail.com>
Co-Authored-By: Álvaro Herrera <alvherre@kurilemu.de>
Reported-By: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-By: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://postgr.es/m/d50ff977-c728-4e9e-8488-fc2688e08754@oss.nttdata.com
2025-06-26 18:24:12 +02:00
Fujii Masao
81ce602d48 Make CREATE TABLE LIKE copy comments on NOT NULL constraints when requested.
Commit 14e87ffa5c5 introduced support for adding comments to NOT NULL
constraints. However, CREATE TABLE LIKE INCLUDING COMMENTS did not copy
these comments to the new table. This was an oversight in that commit.

This commit corrects the behavior by ensuring that CREATE TABLE LIKE also
copies the comments on NOT NULL constraints when INCLUDING COMMENTS is
specified.
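
A sketch of the now-working case (the constraint name is assumed to
follow the default pattern):

    CREATE TABLE t1 (a int NOT NULL);
    COMMENT ON CONSTRAINT t1_a_not_null ON t1 IS 'must be set';
    CREATE TABLE t2 (LIKE t1 INCLUDING COMMENTS);  -- comment carries over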

Author: Jian He <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/127debef-e558-4784-9e24-0d5eaf91e2d1@oss.nttdata.com
2025-06-26 20:25:34 +09:00
Richard Guo
5069fef1cf Expand virtual generated columns for ALTER COLUMN TYPE
For the subcommand ALTER COLUMN TYPE of the ALTER TABLE command, the
USING expression may reference virtual generated columns.  These
columns must be expanded before the expression is fed through
expression_planner and the expression-execution machinery.  Failing to
do so can result in incorrect rewrite decisions, and can also lead to
"ERROR:  unexpected virtual generated column reference".

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/b5f96b24-ccac-47fd-9e20-14681b894f36@gmail.com
2025-06-26 12:17:12 +09:00
Peter Eisentraut
0cd69b3d7e Restrict virtual columns to use built-in functions and types
Just like selecting from a view is exploitable (CVE-2024-7348),
selecting from a table with virtual generated columns is exploitable.
Users who are concerned about this can avoid selecting from views, but
telling them to avoid selecting from tables is less practical.

To address this, this commit restricts generation expressions for
virtual generated columns to using built-in functions and types, and
restricts the columns themselves to having a built-in type.
We assume that built-in functions and types cannot be exploited for
this purpose.
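
A hypothetical sketch of what is now rejected (exact error wording not
shown):

    CREATE FUNCTION f(i int) RETURNS int LANGUAGE sql RETURN i + 1;
    CREATE TABLE t (
        a int,
        b int GENERATED ALWAYS AS (f(a)) VIRTUAL  -- fails: f() is not built-in
    );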

In the future, this could be expanded by some new mechanism to declare
other functions and types as safe or trusted for this purpose, but
that is to be designed.

(An alternative approach might have been to expand the
restrict_nonsystem_relation_kind GUC to handle this, like the fix for
CVE-2024-7348.  But that is kind of an ugly approach.  That fix had to
fit in the constraints of fixing an ancient vulnerability in all
branches.  Since virtual generated columns are new, we're free from
the constraints of the past, and we can and should use cleaner
options.)

Reported-by: Feike Steenbergen <feikesteenbergen@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAK_s-G2Q7de8Q0qOYUR%3D_CTB5FzzVBm5iZjOp%2BmeVWpMpmfO0w%40mail.gmail.com
2025-06-25 09:56:49 +02:00
Michael Paquier
661643deda Avoid scribbling of VACUUM options
This fixes two issues with the handling of VacuumParams in vacuum_rel().
This code path has the idea to change the passed-in pointer of
VacuumParams for the "truncate" and "index_cleanup" options for the
relation worked on, impacting the two following scenarios where
incorrect options may be used because a VacuumParams pointer is shared
across multiple relations:
- Multiple relations in a single VACUUM command.
- TOAST relations vacuumed with their main relation.

The problem is avoided by providing to the two callers of vacuum_rel()
copies of VacuumParams, before the pointer is updated for the "truncate"
and "index_cleanup" options.

The refactoring of the VACUUM option and parameters done in 0d831389749a
did not introduce an issue, but it has encouraged the problem we are
dealing with in this commit, with b84dbc8eb80b for "truncate" and
a96c41feec6b for "index_cleanup", which were added a couple of years
after the initial refactoring.  HEAD will be improved with a different
patch that hardens the uses of VacuumParams across the tree.  This
cannot be backpatched as it introduces an ABI breakage.

The backend portion of the patch has been authored by Nathan, while I
have implemented the tests.  The tests rely on injection points to check
the option values, making them faster, more reliable than the tests
originally proposed by Shihao, and they also provide more coverage.
This part can only be backpatched down to v17.

Reported-by: Shihao Zhong <zhong950419@gmail.com>
Author: Nathan Bossart <nathandbossart@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAGRkXqTo+aK=GTy5pSc-9cy8H2F2TJvcrZ-zXEiNJj93np1UUw@mail.gmail.com
Backpatch-through: 13
2025-06-25 10:03:46 +09:00