For UNION, EXCEPT and INTERSECT, we were not very good at estimating the
PathTarget.width for the set operation. Since the targetlist of the set
operation is made up of Vars with varno==0, this would result in
get_expr_width() applying a default estimate based on the Var's type
rather than taking width estimates from any relation's statistics.
Here we attempt to improve the situation by looking at the width estimates
for the set operation child paths and calculating the average width of the
relevant child paths weighted over the estimated number of rows. For
UNION and INTERSECT, the relevant paths to look at are *all* child paths.
For EXCEPT, since we don't return rows from the right-hand child (only
possibly remove left-hand rows matching those), we use only the left-hand
child for width estimates.
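For example, with child paths of estimated widths w1..wn and estimated
row counts r1..rn, the width estimate is effectively:

    width = (r1*w1 + r2*w2 + ... + rn*wn) / (r1 + r2 + ... + rn)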
This also adjusts the hashed-UNION Path's PathTarget to use the same
PathTarget as its Append subpath. Both PathTargets will be the same and
are void of any resjunk columns, per generate_append_tlist(). Making
the AggPath use the same PathTarget saves having to adjust the "width"
of the AggPath's PathTarget too.
This was reported as a bug by sunw.fnst, but it's not something we ever
claimed to do properly. Plus, if we were to adjust this in back
branches, plans could change as the estimated input sizes to Sorts and
Hash Aggregates could go up or down. Plan choices aren't something we
want to destabilize in stable versions.
Reported-by: sunw.fnst <936739278@qq.com>
Author: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/tencent_34CF8017AB81944A4C08DD089D410AB6C306@qq.com
This serves as coverage for the tracking of the entry count added by
7bd2975fa92b, as built-in variable-sized stats kinds have no need for it,
at least not yet.
A new function, called injection_points_stats_count(), is added to the
module; it returns the number of entries. This has been useful when
doing some benchmarking to check the sanity of the counts.
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/aMPKWR81KT5UXvEr@paquier.xyz
Stats kinds can set a new option called "track_entry_count" (disabled by
default, available for variable-numbered stats) that will make pgstats
track the number of entries that exist in its shared hashtable.
As there is only one code path where a new entry is added, and one code
path where entries are freed, the count tracking is straightforward in
its implementation. Reads of these counters are optimistic, and may
change across two calls. The counter is incremented when an entry is
created (not when reused), and is decremented when an entry is freed
from the hashtable (marked for drop with its refcount reaching 0), which
is something that pgstats decides internally.
A first use case of this facility would be pg_stat_statements, where we
need to be able to cap the number of entries that would be stored in the
shared hashtable, based on its "max" GUC. The module currently relies
on hash_get_num_entries(), which offers a cheap way to count how many
entries are in its hash table, but we cannot do that in pgstats for
variable-sized stats kinds as a single hashtable is used for all the
stats kinds. Independently of PGSS, this is useful for other custom
stats kinds that want to cap, control, or track the number of entries
they have, without depending on a potentially expensive sequential scan
to know the number of entries while holding an extra exclusive lock.
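For illustration, a custom stats kind would opt in roughly as follows (a
minimal sketch; apart from the new option itself, the field values shown
here are generic assumptions):

    static const PgStat_KindInfo custom_stats = {
        .name = "custom_stats",
        .fixed_amount = false,        /* variable-numbered stats kind */
        .track_entry_count = true,    /* the new option */
        /* shared_size, offsets and callbacks omitted for brevity */
    };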
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Keisuke Kuroda <keisuke.kuroda.3862@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/aMPKWR81KT5UXvEr@paquier.xyz
transformPLAssignStmt contained many lines cribbed directly from
transformSelectStmt. I had supposed that we could manage to keep
the two copies in sync, but the bug just fixed in 7504d2be9 shows
that that hope was foolish. Let's refactor so there's just one copy.
The main stumbling block to doing this is that transformPLAssignStmt
has a chunk of custom code that has to run after transformTargetList
but before we potentially modify the tlist further during analysis
of ORDER BY and GROUP BY. Rather than make transformSelectStmt fully
aware of PLAssignStmt processing, I put that code into a callback
function. It still feels a little bit ugly, but it's not too awful,
and surely it's better than a hundred lines of duplicated code.
The steps involved in processing a PLAssignStmt remain exactly
the same as before, just in different places.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/31027.1758919078@sss.pgh.pa.us
Because we failed to do this, DISTINCT in GROUP BY DISTINCT would be
ignored in PL/pgSQL assignment statements. It's not surprising that
no one noticed, since such statements will throw an error if the query
produces more than one row. That eliminates most scenarios where
advanced forms of GROUP BY could be useful, and indeed makes it hard
even to find a simple test case. Nonetheless it's wrong.
This is directly the fault of be45be9c3 which added the groupDistinct
field, but I think much of the blame has to fall on c9d529848, in
which I incautiously supposed that we'd manage to keep two copies of
a big chunk of parse-analysis logic in sync. As a follow-up, I plan
to refactor so that there's only one copy. But that seems useful
only in master, so let's use this one-line fix for the back branches.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/31027.1758919078@sss.pgh.pa.us
Backpatch-through: 14
It had always been intended that this already works correctly as
pg_unreachable() uses __assume(0) on MSVC, and that directs the compiler
in a way so it knows that a given function won't return. However, with
ereport_domain(), it didn't work...
It's now understood that the failure to determine that elog(ERROR) does not
return comes from the inability of the MSVC compiler to detect that the
"const int elevel_" is the same as the "elevel" macro parameter. MSVC seems
unable to treat the "if (elevel_ >= ERROR)" branch as constantly true when
the macro is used with any elevel >= ERROR; therefore the pg_unreachable()
is seen to be present only in a *conditional* branch rather than present
*unconditionally*.
While there seems to be no way to force the compiler into knowing that
elevel_ is equal to elevel within the ereport_domain() macro, there is a
way in C11 to determine whether the elevel parameter is a compile-time
constant or not. This is done via some hackery using the C11 _Generic()
construct, which gives us functionality similar to GCC's
__builtin_constant_p(), albeit only for integers.
Here we define pg_builtin_integer_constant_p() for this purpose.
Callers can check for availability via HAVE_PG_BUILTIN_INTEGER_CONSTANT_P.
ereport_domain() has been adjusted to use
pg_builtin_integer_constant_p() instead of __builtin_constant_p().
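For reference, the well-known C11 shape of this trick is roughly the
following (a sketch, not necessarily the committed macro):

    /* evaluates to true if "x" is an integer constant expression */
    #define pg_builtin_integer_constant_p(x) \
        _Generic(1 ? (void *) ((x) * 0l) : (char *) 0, \
                 char *: true, \
                 void *: false)

When "x" is an integer constant expression, (void *) ((x) * 0l) is a null
pointer constant, so the conditional expression has type char *; otherwise
it has type void *, and _Generic() can tell the two apart.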
It's not quite clear at this stage if this now allows us to forgo the
likes of "return NULL; /* keep compiler quiet */", as there may be other
compilers in use that have similar struggles. It's just a matter of time
before someone commits a function that does not "return" a value after
an elog(ERROR). Let's make time and the lack of complaints about said
commit be the judge of whether we need to continue the
"/* keep compiler quiet */" palaver.
Author: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CAApHDvom02B_XNVSkvxznVUyZbjGAR+5myA89ZcbEd3=PA9UcA@mail.gmail.com
This allows these routines to be reused by a future utility heavily
based on vacuumdb.
I made a few relatively minor changes from the original, most notably:
- objfilter was declared as an enum but the values are bit-or'ed, and
individual bits are tested throughout the code. We've discussed this
coding pattern in other contexts and stayed away from it, on the
grounds that the values so generated aren't really true values of the
enum. This commit changes it to be a bits32 with a few #defines for
the flag definitions, the way we do elsewhere (see the sketch after this
list). Also, instead of being a global variable, it's now in the
vacuumingOptions struct.
- Two booleans, analyze_only (in vacuumingOptions) and analyze_in_stages
(passed around as a separate boolean argument), are really determining
what "mode" the program runs in -- it's either vacuum, or one of those
two modes. I have three adjectives for them: inconsistent,
unergonomic, unorthodox. Removing these and replacing them with a
mode enum to be kept in vacuumingOptions makes the code structure easier
to understand in a couple of places, and it'll be useful for the new
mode we add next, so do that.
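As an illustration of the first change, the resulting pattern looks
roughly like this (the flag names below are made up, not the committed
ones):

    typedef uint32 bits32;          /* as in c.h */

    #define OBJFILTER_ALL_DBS   0x01
    #define OBJFILTER_DATABASE  0x02
    #define OBJFILTER_TABLE     0x04
    #define OBJFILTER_SCHEMA    0x08

    typedef struct vacuumingOptions
    {
        bits32      objfilter;      /* was a global enum variable */
        /* ... other members ... */
    } vacuumingOptions;

    /* individual bits are tested throughout the code */
    static bool
    filtering_tables(const vacuumingOptions *vacopts)
    {
        return (vacopts->objfilter & OBJFILTER_TABLE) != 0;
    }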
Reviewed-by: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/202508301750.cbohxyy2pcce@alvherre.pgsql
Use our established coding pattern to reduce maintenance pain when
adding other per-process-type characteristics.
Like PG_KEYWORD, PG_CMDTAG, PG_RMGR.
To keep the strings translatable, the relevant makefile now also scans
src/include for this specific file. I didn't want to have it scan all
.h files, as then gettext would have to scan all header files. I didn't
find any way to affect the meson behavior in this respect though.
Author: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Jonathan Gonzalez V. <jonathan.abdiel@gmail.com>
Discussion: https://postgr.es/m/202507151830.dwgz5nmmqtdy@alvherre.pgsql
When running pgbench with --verbose-errors option and a custom script that
triggered retriable errors (e.g., serialization errors) in pipeline mode,
an assertion failure could occur:
Assertion failed: (sql_script[st->use_file].commands[st->command]->type == 1), function commandError, file pgbench.c, line 3062.
The failure happened because pgbench assumed these errors would only occur
during SQL commands, but in pipeline mode they can also happen during
\endpipeline meta command.
This commit fixes the assertion failure by adjusting the assertion check to
allow such errors during either SQL commands or \endpipeline.
Backpatch to v15, where the assertion check was introduced.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwGWQMOzNkQs-LmpDHdNC0h8dmAuUMRvZrEntQi5a-b=Kg@mail.gmail.com
This improves the stability of VACUUM when processing btree indexes,
which was previously able to trigger an assertion failure in
_bt_lock_subtree_parent() when an error was previously thrown outside
the scope of _bt_split() when splitting a btree page. VACUUM would
consider the index as in a corrupted state as the right page would not
be zeroed for the error thrown (allocation failure is one pattern).
In a non-assert build, VACUUM is able to succeed, reporting what it sees
as a corruption while attempting to fix the index. This would manifest
as a LOG message, as of:
LOG: failed to re-find parent key in index "idx" for deletion target
page N
CONTEXT: while vacuuming index "idx" of relation "public.tab"
This commit improves the code to rely on two PGAlignedBlocks that are
used as a temporary space for the left and right pages. The main change
concerns the right page, whose contents are now copied into the
"temporary" PGAlignedBlock page while its original space is zeroed. Its
contents are moved from the PGAlignedBlock page back to the page once we
enter the critical section used for the split. This simplifies the
split logic, as it is not necessary to zero the right page before
throwing an error anymore. Hence errors can now be thrown outside the
split code. For the left page, this shaves one allocation, with
PageGetTempPage() being previously used.
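In outline, the new right-page handling looks something like this (a
sketch with assumed variable names, not the committed code):

    PGAlignedBlock  tmp_right;      /* stack temporary, no allocation */

    /* stash the right page's contents and zero its buffer, so that an
     * error thrown from here on leaves the page safely zeroed */
    memcpy(tmp_right.data, rightpage, BLCKSZ);
    memset(rightpage, 0, BLCKSZ);

    /* ... work that may elog(ERROR) ... */

    START_CRIT_SECTION();
    /* now move the contents back into place */
    memcpy(rightpage, tmp_right.data, BLCKSZ);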
The previous logic originates from commit 8fa30f906b, at a point where
PGAlignedBlock did not exist yet. This could be argued as something
that should be backpatched, but the lack of complaints indicates that it
may not be necessary.
Author: Konstantin Knizhnik <knizhnik@garret.ru>
Discussion: https://postgr.es/m/566dacaf-5751-47e4-abc6-73de17a5d42a@garret.ru
The comment claimed that a TABLESPACE reference was added to the
resulting string, but that's not true. Looks like the comment was
copied from pg_get_indexdef_string() without being adjusted correctly.
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxHwVPgeu8o9D8oUeDQYEHTAZGt-J5uaJNgYMzkAW7MiCA@mail.gmail.com
... and find_window_run_conditions.
This seems to have been around and unused ever since the Run Condition
feature was added in 9d9c02ccd. Let's remove it to clean things up a
bit.
Author: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/DD26NJ0Y34ZS.2ZOJPHSY12PFI@gmail.com
While most tab completions in match_previous_words() use
COMPLETE_WITH* macros to wrap rl_completion_matches(), some direct
calls to rl_completion_matches() still remained.
This commit introduces COMPLETE_WITH_FILES and COMPLETE_WITH_GENERATOR
macros to replace these direct calls, enhancing both code consistency
and readability.
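In spirit, the new macros are thin wrappers of this shape (a hypothetical
sketch; the committed signatures may differ):

    #define COMPLETE_WITH_GENERATOR(function) \
        matches = rl_completion_matches(text, function)

COMPLETE_WITH_FILES would similarly wrap a filename-completion generator.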
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/20250605100835.b396f9d656df1018f65a4556@sraoss.co.jp
I noticed the surprising behavior that pg_sleep(0.001) will sleep
for 2ms not the expected 1ms. Apparently the float8 calculation of
time-to-sleep is managing to produce something a hair over 1, which
ceil() rounds up to 2, and then WaitLatch() faithfully waits 2ms.
It could be that this works as-expected for some ranges of current
timestamp but not others, which would account for not having seen
it before. In any case, let's try to avoid it by removing the
float arithmetic in the delay calculation. We're stuck with the
declared input type being float8, but we can convert that to integer
microseconds right away, and then work strictly with integral values.
There might still be roundoff surprises for certain input values,
but at least the behavior won't be time-varying.
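The idea, in sketch form (not the committed code):

    /* convert the float8 seconds argument to integral microseconds once,
     * then do all remaining arithmetic on integers */
    float8      secs = PG_GETARG_FLOAT8(0);
    int64       delay_us = (int64) rint(secs * 1000000.0);
    int64       delay_ms = (delay_us + 999) / 1000;    /* integer ceiling */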
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/3879137.1758825752@sss.pgh.pa.us
The functions test_stat_func() and test_stat_func2() had empty
function bodies, so that they took very little time to run. This made
it possible that on machines with relatively low timer resolution the
functions could return before the clock advanced, making the test fail
(as seen on buildfarm members fruitcrow and hamerkop).
To avoid that, pg_sleep for 10us during the functions. As far as we
can tell, all current hardware has clock resolution much less than
that. (The current implementation of pg_sleep will round it up to
1ms anyway, but someday that might get improved.)
Author: Michael Banck <mbanck@gmx.net>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/68d413a3.a70a0220.24c74c.8be9@mx.google.com
Backpatch-through: 15
If we already have an extension_state array but see a new extension_id
much larger than the highest extension_id we've previously seen,
the old code might have failed to expand the array to a large enough
size, leading to disaster. Also, if we don't have an extension array
at all and need to create one, we should make sure that it's big enough
that we don't have to resize it instantly.
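A sketch of the intended sizing logic (variable names assumed):

    /* keep doubling until the new extension_id fits, rather than growing
     * by a fixed increment */
    Size        newsize = Max(cursize, 16);    /* never start too small */

    while ((Size) extension_id >= newsize)
        newsize *= 2;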
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: http://postgr.es/m/2949591.1758570711@sss.pgh.pa.us
Backpatch-through: 18
These were omitted from build dependencies and also tab/nbsp
checks, with the result that "make" did nothing after modifying
a func/*.sgml file.
Oversight in 4e23c9ef6. AFAICT we don't need any comparable
changes in meson.build, or at least I don't see it doing anything
special for the pre-existing ref/*.sgml files.
When defining an injection point there is no need to wrap the definition
with USE_INJECTION_POINTS guards, as the INJECTION_POINT macro is available
in all builds. Remove the guards to make the code consistent.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/OSCPR01MB14966C8015DEB05ABEF2CE077F51FA@OSCPR01MB14966.jpnprd01.prod.outlook.com
Backpatch-through: 17
Commit 4464fddf removed the large insertions but failed to remove
all the comments referring to them. Also remove a superfluous ')'
in another comment.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/OSCPR01MB149663A99DAF2826BE691C23DF51FA@OSCPR01MB14966.jpnprd01.prod.outlook.com
... which silently propagates a lot of headers into many places
via pgstat.h, as evidenced by the variety of headers that this patch
needs to add to seemingly random places. Add a minimum of typedefs to
conflict.h to be able to remove execnodes.h, and fix the fallout.
Backpatch to 18, where conflict.h first appeared.
Discussion: https://postgr.es/m/202509191927.uj2ijwmho7nv@alvherre.pgsql
This commit updates the pgbench documentation to list \gset and \aset
as separate terms for easier reading. It also clarifies that \gset raises
an error if the query returns zero or multiple rows, and explains how to
detect cases where the query with \aset returned no rows.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/20250626180125.5b896902a3d0bcd93f86c240@sraoss.co.jp
Previously, vacuumdb --analyze-only issued VACUUM (ONLY_DATABASE_STATS)
at the end. Since --analyze-only is meant to update optimizer statistics only,
this extra VACUUM command is unnecessary.
This commit prevents vacuumdb --analyze-only from running that redundant
VACUUM command.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Mircea Cadariu <cadariu.mircea@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwEqHGa-k=wbRMucUVihHVXk4NQkK94GNN=ym9cQ5HBSHg@mail.gmail.com
f83d709760d8 incorrectly refers to an XLOG_HEAP2_PRUNE_FREEZE WAL record
opcode. No such code exists. The relevant opcodes are
XLOG_HEAP2_PRUNE_ON_ACCESS, XLOG_HEAP2_PRUNE_VACUUM_SCAN, and
XLOG_HEAP2_PRUNE_VACUUM_CLEANUP. Correct it.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/yn4zp35kkdsjx6wf47zcfmxgexxt4h2og47pvnw2x5ifyrs3qc%407uw6jyyxuyf7
Without this, rebuilds can malfunction unless --enable-depend is used.
Historically we've expected that you can get away without
--enable-depend as long as you manually clean after changing *.h
files; the makefiles are supposed to handle other sorts of
dependencies. So add this one.
Follow-on to 635998965, so no need for back-patch.
Discussion: https://postgr.es/m/3121329.1758650878@sss.pgh.pa.us
We were already doing a short (1-second) pg_test_timing run during
check-world and buildfarm runs. But we weren't doing anything
with the result except for a basic regex-based sanity check.
Collecting that output from buildfarm runs is seeming very
attractive though, because it would help us determine what sort
of timing resolution is available on supported platforms.
It's not very long, so let's just note it verbatim in the TAP log.
Discussion: https://postgr.es/m/3321785.1758728271@sss.pgh.pa.us
This commit corrects several issues in function comments:
* The parameter "rel" was incorrectly referred to as "relation" in the comments
for table_tuple_delete(), table_tuple_update(), and table_tuple_lock().
* In table_tuple_delete(), "changingPart" was listed as an output parameter
in the comments but is actually input.
* In table_tuple_update(), "slot" was listed as an input parameter
in the comments but is actually output.
* The comment for "update_indexes" in table_tuple_update() was mis-indented.
* The comments for heap_lock_tuple() incorrectly referenced a non-existent
"tid" parameter.
Author: Chao Li <lic@highgo.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAEoWx2nB6Ay8g=KEn7L3qbYX_4+sLk9XOMkV0XZqHR4cTY8ZvQ@mail.gmail.com
Format validation and element extraction for intermediate line
strings were inconsistent in their handling of tab delimiters,
which resulted in an unclear error when multiple tab characters
were used as a delimiter. This fixes it by using captures from
the validation regex instead of a separate split() to avoid the
inconsistency. Also, it ensures that \t+ is used consistently
when inspecting the strings.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/20250729.135638.1148639539103758555.horikyota.ntt@gmail.com
Mappings for 18 characters have changed, affecting 36 code points. This
is a break in compatibility, but these characters are rarely used.
U+E5E5 (Private Use Area) was previously mapped to \xA3A0. This code
point now maps to \x65356535. Attempting to convert \xA3A0 will now
raise an error.
Separate from the 2022 update, the following mappings were previously
swapped, and subsequently corrected in 2000 and later versions:
* U+E7C7 (Private Use Area) now maps to \x8135F437
* U+1E3F (Latin Small Letter M with Acute) now maps to \xA8BC
The 2022 standard mentions the following policy changes, but they
have no effect in our implementation:
66 new ideographs are now required, but these are mapped
algorithmically so were already handled by utf8_and_gb18030.c.
Nine CJK compatibility ideographs are no longer required, but
implementations may retain them, as does the source we use from
the Unicode Consortium.
Release notes: Compatibility section
For further details, see:
https://www.unicode.org/L2/L2022/22274-disruptive-changes.pdf
https://ken-lunde.medium.com/the-gb-18030-2022-standard-3d0ebaeb4132
Author: Chao Li <lic@highgo.com>
Author: Zheng Tao <taoz@highgo.com>
Discussion: https://postgr.es/m/966d9fc.169.198741fe60b.Coremail.jiaoshuntian%40highgo.com
Previously, the parallel apply worker used SIGINT to receive a graceful
shutdown signal from the leader apply worker. However, SIGINT is also used
by the LOCK_TIMEOUT handler to trigger a query-cancel interrupt. This
overlap caused the parallel apply worker to miss LOCK_TIMEOUT signals,
leading to incorrect behavior during lock wait/contention.
This patch resolves the conflict by switching the graceful shutdown signal
from SIGINT to SIGUSR2.
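In sketch form, the worker's signal setup becomes something like the
following (the handler names here are assumptions):

    /* SIGINT is reserved again for query cancel (e.g. LOCK_TIMEOUT) */
    pqsignal(SIGINT, StatementCancelHandler);
    /* the leader's graceful-shutdown request moves to SIGUSR2 */
    pqsignal(SIGUSR2, SignalHandlerForShutdownRequest);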
Reported-by: Zane Duffield <duffieldzane@gmail.com>
Diagnosed-by: Zhijie Hou <houzj.fnst@fujitsu.com>
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 16, where it was introduced
Discussion: https://postgr.es/m/CACMiCkXyC4au74kvE2g6Y=mCEF8X6r-Ne_ty4r7qWkUjRE4+oQ@mail.gmail.com
The macros converting between "text" and Bitmapset were using arbitrary
casts with Datum, something that is not fine since 2a600a93c7be.
These macros do not actually need casts with Datum, as they are given
"text" and Bitmapset data directly in input. They are updated to use
cstring_to_text() and text_to_cstring(), fixing the compiler warnings
reported by the buildfarm. Note that appending -m32 to gcc to trigger
32-bit builds was enough to reproduce the warnings here.
While on it, outer parentheses are added to TEXT_TO_BITMAPSET(), and
inner parentheses are removed from BITMAPSET_TO_TEXT(), to make these
macros more consistent with the style used in the tree, based on
suggestions by Tom Lane.
Oversights in commit 00c3d87a5cab.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Greg Burd <greg@burd.me>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/3027069.1758606227@sss.pgh.pa.us
Result nodes now include an RTI set, which is only non-NULL when they
have no subplan, and is taken from the relid set of the RelOptInfo that
the Result is generating. ExplainPreScanNode now takes notice of these
RTIs, which means that a few things get schema-qualified in the
regression tests that previously did not. This makes the output more
consistent between cases where some part of the plan tree is replaced by
a Result node and those where this does not happen.
Likewise, pg_overexplain's EXPLAIN (RANGE_TABLE) now displays the RTIs
stored in a Result node just as it already does for other RTI-bearing
node types.
Result nodes also now include a result_reason, which tells us something
about why the Result node was inserted. Using that information, EXPLAIN
now emits, where relevant, a "Replaces" line describing the origin of
a Result node.
The purpose of these changes is to allow code that inspects a Plan
tree to understand the origin of Result nodes that appear therein.
Discussion: http://postgr.es/m/CA+TgmoYeUZePZWLsSO+1FAN7UPePT_RMEZBKkqYBJVCF1s60=w@mail.gmail.com
Reviewed-by: Alexandra Wang <alexandra.wang.oss@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Remove stray whitespace in xref tag.
This was found due to a regression in xmllint 2.15.0 which flagged
this as an error, and at the time of this commit no fix for xmllint
has shipped.
Author: Erik Wienhold <ewie@ewie.name>
Discussion: https://postgr.es/m/f4c4661b-4e60-4c10-9336-768b7b55c084@ewie.name
Backpatch-through: 17
Bitmapset has a complex set of APIs, defined in bitmapset.h, and it can
be hard to test edge cases with the backend core code only.
This test module is aimed at closing the gap, and implements a set of
SQL functions that act as wrappers of the low-level C functions of the
same names. These functions use text as the data type for their input
and output, since Bitmapset, being a node, has support for text
conversion. An extra function, named test_random_operations(), can be
used to stress bitmaps with random member values and a defined number of
operations, which is potentially useful for purposes beyond testing.
The coverage increases from 85.2% to 93.4%. It should be possible to
cover more code paths, but at least it's a beginning.
Author: Greg Burd <greg@burd.me>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/7BD1ABDB-B03A-464A-9BA9-A73B55AD8A1F@getmailspring.com
The package for the UUID library may be named "uuid" or "ossp-uuid", and
meson.build has been using a single call of dependency() with multiple
names, something only supported since meson 0.60.0.
The minimum version of meson supported by Postgres is 0.57.2 on HEAD,
since f039c2244110, and 0.54 on stable branches down to 16.
Author: Oreo Yang <oreo.yang@hotmail.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/OS3P301MB01656E6F91539770682B1E77E711A@OS3P301MB0165.JPNP301.PROD.OUTLOOK.COM
Backpatch-through: 16
This adds support for base64url encoding and decoding, a base64
variant which is safe to use in filenames and URLs. base64url
replaces '+' in the base64 alphabet with '-' and '/' with '_',
thus making it safe for URL addresses and file systems.
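The variant differs from plain base64 only in those two alphabet
characters, i.e. something like (a trivial sketch, not the committed
code):

    /* translate a base64 output character to its base64url equivalent */
    static inline char
    b64_to_b64url(char c)
    {
        return (c == '+') ? '-' : (c == '/') ? '_' : c;
    }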
Support for base64url was originally suggested by Przemysław Sztoch.
Author: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: David E. Wheeler <david@justatheory.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Chao Li (Evan) <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/70f2b6a8-486a-4fdb-a951-84cef35e22ab@sztoch.pl
The lossy-counting algorithm that ANALYZE uses to identify most-common
array elements has a notion of cutoff frequency: elements with
frequency greater than that are guaranteed to be collected, elements
with smaller frequencies are not. In cases where we find fewer MCEs
than the stats target would permit us to store, the cutoff frequency
provides valuable additional information, to wit that there are no
non-MCEs with frequency greater than that. What the selectivity
estimation functions actually use the "minfreq" entry for is as a
ceiling on the possible frequency of non-MCEs, so using the cutoff
rather than the lowest stored MCE frequency provides a tighter bound
and more accurate estimates.
Therefore, instead of redundantly storing the minimum observed MCE
frequency, store the cutoff frequency when there are fewer tracked
values than we want. (When there are more, then of course we cannot
assert that no non-stored elements are above the cutoff frequency,
since we're throwing away some that are; so we still use the
minimum stored frequency in that case.)
Notably, this works even when none of the values are common enough
to be called MCEs. In such cases we previously stored nothing in
the STATISTIC_KIND_MCELEM pg_statistic slot, which resulted in the
selectivity functions falling back to default estimates. So in that
case we want to construct a STATISTIC_KIND_MCELEM entry that contains
no "values" but does have "numbers", to wit the three extra numbers
that the MCELEM entry type defines. A small obstacle is that
update_attstats() has traditionally stored a null, not an empty array,
when passed zero "values" for a slot. That gives rise to an MCELEM
entry that get_attstatsslot() will spit up on. The least risky
solution seems to be to adjust update_attstats() so that it will emit
a non-null (but possibly empty) array when the passed stavalues array
pointer isn't NULL, rather than conditioning that on numvalues > 0.
In other existing cases I don't believe that that changes anything.
For consistency, handle the stanumbers array the same way.
In passing, improve the comments in routines that use
STATISTIC_KIND_MCELEM data. Particularly, explain why we use
minfreq / 2 not minfreq as the estimate for non-MCE values.
Thanks to Matt Long for the suggestion that we could apply this
idea even when there are more than zero MCEs.
Reported-by: Mark Frost <FROSTMAR@uk.ibm.com>
Reported-by: Matt Long <matt@mattlong.org>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/PH3PPF1C905D6E6F24A5C1A1A1D8345B593E16FA@PH3PPF1C905D6E6.namprd15.prod.outlook.com
Commit a391ff3c3, which added the ability for a function's support
function to provide a custom selectivity estimate for "WHERE f(...)",
unintentionally removed the possibility of applying expression
statistics after finding there's no applicable support function.
That happened because we no longer fell through to boolvarsel()
as before. Refactor to do so again, putting the 0.3333333 default
back into boolvarsel() where it had been (cf. commit 39df0f150).
I surely wouldn't have made this error if 39df0f150 had included
a test case, so add one now. At the time we did not have the
"extended statistics" infrastructure, but we do now, and it is
also unable to work in this scenario because of this error.
So make use of that for the test case.
This is very clearly a bug fix, but I'm afraid to put it into
released branches because of the likelihood of altering plan
choices, which we avoid doing in minor releases. So, master only.
Reported-by: Frédéric Yhuel <frederic.yhuel@dalibo.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/a8b99dce-1bfb-4d97-af73-54a32b85c916@dalibo.com
Initially this was to fix the "catched" typo, but I (David) wasn't quite
clear on what the previous comment meant about being "effective". I
expect this means efficiency, so I've reworded the comment to indicate
that.
While this is only a comment fixup, for the sake of minimizing possible
future backpatching pain, I've opted to backpatch to 18 since this code is
new to that version and the release isn't out the door yet.
Author: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/CAHewXNmSYWPud1sfBvpKbCJeRkWeZYuqatxtV9U9LvAFXBEiBw@mail.gmail.com
Backpatch-through: 18
Commit 216a784829c introduced parallel apply workers, allowing multiple
processes to share a replication origin. To support this,
replorigin_session_setup() was extended to accept a pid argument
identifying the process using the origin.
This commit exposes that capability through the SQL interface function
pg_replication_origin_session_setup() by adding an optional pid parameter.
This enables multiple processes to coordinate replication using the same
origin when using SQL-level replication functions.
This change allows non-builtin logical replication solutions to
implement parallel apply for large transactions.
Additionally, an existing internal error was made user-facing, as it can
now be triggered via the exposed SQL API.
Author: Doruk Yilmaz <doruk@mixrank.com>
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Discussion: https://postgr.es/m/CAMPB6wfe4zLjJL8jiZV5kjjpwBM2=rTRme0UCL7Ra4L8MTVdOg@mail.gmail.com
Discussion: https://postgr.es/m/CAE2gYzyTSNvHY1+iWUwykaLETSuAZsCWyryokjP6rG46ZvRgQA@mail.gmail.com
When deciding which code path to use depending on the state of recovery,
XLogFlush() and XLogNeedsFlush() have been relying on different
criteria:
- XLogFlush() relied on XLogInsertAllowed().
- XLogNeedsFlush() relied on RecoveryInProgress().
Currently, the checkpointer is allowed to insert WAL records while
RecoveryInProgress() returns true for an end-of-recovery checkpoint,
where XLogInsertAllowed() matters. Using RecoveryInProgress() in
XLogNeedsFlush() did not really matter for its existing callers, as the
checkpointer only called XLogFlush(). However, a feature under
discussion, by Melanie Plageman, needs XLogNeedsFlush() to be able to
work in more contexts, the end-of-recovery checkpoint being one.
This commit changes XLogNeedsFlush() to use XLogInsertAllowed() instead
of RecoveryInProgress(), making the checks in both routines more
consistent. While on it, an assertion based on XLogNeedsFlush() is
added at the end of XLogFlush(), triggered when flushing a physical
position (not for the normal recovery path that checks for updates of
the minimum recovery point). This assertion would fail for example in
the recovery test 015_promotion_pages if XLogNeedsFlush() is changed to
use RecoveryInProgress(). This should hopefully be enough to ensure
that the checks done in both routines remain consistent.
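The added check amounts to something like the following at the end of
XLogFlush() (a sketch):

    /* having flushed a physical position, nothing should remain to flush */
    Assert(!XLogNeedsFlush(record));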
Author: Melanie Plageman <melanieplageman@gmail.com>
Co-authored-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAAKRu_a1vZRZRWO3_jv_X13RYoqLRVipGO0237g5PKzPa2YX6g@mail.gmail.com
Commit bb3ec16e14 moved partition pruning metadata into PlannedStmt.
At executor startup this metadata is used to initialize the EState
fields es_part_prune_infos, es_part_prune_states, and
es_part_prune_results. EvalPlanQualStart() failed to copy those
fields into the child EState, causing NULL dereference when Append
ran partition pruning during a recheck. This can occur with DELETE
or UPDATE on partitioned tables that use runtime pruning, e.g. with
generic plans.
Fix by copying all partition pruning state into the EPQ estate.
Add an isolation test that reproduces the crash with concurrent
UPDATE and DELETE on a partitioned table, where the DELETE session
hits the crash during its EPQ recheck after the UPDATE commits.
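The fix itself is, in essence, the following in EvalPlanQualStart() (a
sketch using the field names mentioned above; rcestate is the child EPQ
estate):

    rcestate->es_part_prune_infos = parentestate->es_part_prune_infos;
    rcestate->es_part_prune_states = parentestate->es_part_prune_states;
    rcestate->es_part_prune_results = parentestate->es_part_prune_results;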
Bug: #19056
Reported-by: Fei Changhong <feichanghong@qq.com>
Diagnosed-by: Fei Changhong <feichanghong@qq.com>
Author: David Rowley <dgrowleyml@gmail.com>
Co-authored-by: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/19056-a677cef9b54d76a0%40postgresql.org
This change is a tighter rework of 7d85d87f4d5c, which tried to improve
the code so that it would work should PgStat_HashKey gain new fields that
create padding bytes. However, the previous change is proving not to be
enough, as some code paths of pgstats do not pass PgStat_HashKey by
reference (valgrind would warn when padding is added to the structure,
through a new field).
Per discussion, let's document and check that PgStat_HashKey has no
padding rather than try to complicate the code of pgstats so that it is
able to work around that.
This removes a couple of memset(0) calls that should not be required.
While on it, this commit adds a static assertion checking that no
padding is introduced in the structure, by checking that the size of
PgStat_HashKey matches with the sum of the size of all its fields.
The object ID part of the hash key is already 8 bytes, which should be
plenty enough already. A comment is added to discourage the addition of
new fields.
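The assertion is along these lines (field types assumed from the
description above):

    StaticAssertDecl(sizeof(PgStat_HashKey) ==
                     sizeof(PgStat_Kind) + sizeof(Oid) + sizeof(uint64),
                     "PgStat_HashKey must not contain padding bytes");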
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0t9omat+HVSakJXwTMWvhpYFcAZb41RPWKwrKFUgmAFBQ@mail.gmail.com
When shared memory is re-initialized after a crash, the named
LWLock tranche request array that was copied to shared memory will
no longer be accessible. To fix, save the pointer to the original
array in postmaster's local memory, and switch to it when
re-initializing the LWLock-related shared memory.
Oversight in commit ed1aad15e0. Per buildfarm member batta.
Reported-by: Michael Paquier <michael@paquier.xyz>
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aMoejB3iTWy1SxfF%40paquier.xyz
Discussion: https://postgr.es/m/f8ca018f-3479-49f6-a92c-e31db9f849d7%40gmail.com
Previously, pg_restore did not skip security labels on publications or
subscriptions even when --no-publications or --no-subscriptions was specified.
As a result, it could issue SECURITY LABEL commands for objects that were
never created, causing those commands to fail.
This commit fixes the issue by ensuring that security labels on publications
and subscriptions are also skipped when the corresponding options are used.
Backpatch to all supported versions.
Author: Jian He <jian.universality@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com
Backpatch-through: 13
StrategyInitialize() calls InitBufTable() with the maximum number of
entries that the buffer lookup table can ever have. Thus there should not
be any need to allocate more elements after initialization. Hence mark the
hash table as fixed-sized.
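Concretely, this amounts to adding HASH_FIXED_SIZE to the creation flags,
roughly (a sketch):

    HASHCTL     info;

    info.keysize = sizeof(BufferTag);
    info.entrysize = sizeof(BufferLookupEnt);

    SharedBufHash = ShmemInitHash("Shared Buffer Lookup Table",
                                  size, size,    /* init == max: no growth */
                                  &info,
                                  HASH_ELEM | HASH_BLOBS | HASH_FIXED_SIZE);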
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAExHW5v0jh3F_wj86yC=qBfWk0uiT94qy=Z41uzAHLHh0SerRA@mail.gmail.com
If an aggregate function call contains a sub-select that has
an RTE referencing a CTE outside the aggregate, we must treat
that reference like a Var referencing the CTE's query level
for purposes of determining the aggregate's level. Otherwise
we might reach the nonsensical conclusion that the aggregate
should be evaluated at some query level higher than the CTE,
ending in a planner error or a broken plan tree that causes
executor failures.
Bug: #19055
Reported-by: BugForge <dllggyx@outlook.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19055-6970cfa8556a394d@postgresql.org
Backpatch-through: 13
Commit 2a600a93 made Datum 8 bytes wide everywhere. It was no longer
appropriate to use TypeSizeT on 32 bit systems, and JIT compilation
would fail with various type check errors. Introduce a separate
LLVMTypeRef with the name TypeDatum. TypeSizeT is still used in some
places for actual size_t values.
Reported-by: Dmitry Mityugov <d.mityugov@postgrespro.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Tested-by: Dmitry Mityugov <d.mityugov@postgrespro.ru>
Discussion: https://postgr.es/m/0a9f0be59171c2e8f1b3bc10f4fcf267%40postgrespro.ru
The pending entry was not used when incrementing its data; the code
directly manipulated the shared memory pointer, without even locking it.
This could mean losing statistics under concurrent activity. The flush
callback was a no-op.
This code serves as a base template for extensions for the custom
cumulative statistics, so let's be clean and use a pending entry for the
incrementations, whose data is then flushed to the corresponding entry
in the shared hashtable when all the stats are reported, in its own
flush callback.
Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0v0U0yhPbY+bqChomkPbyUrRQ3rQXnZf_SB-svDiQOpgQ@mail.gmail.com
Backpatch-through: 18
The shared memory size was calculated based on an offset of io_handles,
which is itself a pointer included in the structure. We tend to
overestimate the shared memory size overall, so this was unlikely to be an
issue in practice, but let's be correct and use the full size of the
structure in the calculation, so that the pointer for io_handles is
included.
Oversight in da7226993fd4.
Author: Madhukar Prasad <madhukarprasad@google.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAKi+wrbC2dTzh_vKJoAZXV5wqTbhY0n4wRNpCjJ=e36aoo0kFw@mail.gmail.com
Backpatch-through: 18
The EvalPlanQual recheck for TID Range Scan wasn't rechecking that the TID
qual still passed after following update chains. This could result in
tuples being updated or deleted by plans using TID Range Scans where the
ctid of the new (updated) tuple no longer matches the clause of the scan.
This isn't desired behavior, and isn't consistent with what would happen if
the chosen plan had used an Index or Seq Scan. It could also lead to
hard-to-predict behavior for scans that contain TID quals and other quals,
as the planner has the freedom to choose TID Range or some other non-TID
scan method for such queries, and the chosen plan could change at any
moment.
Here we fix this by properly implementing the recheck function for TID
Range Scans.
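The recheck is broadly of this shape (a sketch based on the executor
node's fields; the committed code may differ):

    static bool
    TidRangeRecheck(TidRangeScanState *node, TupleTableSlot *slot)
    {
        /* the new tuple version's ctid must still fall in the TID range */
        if (ItemPointerCompare(&slot->tts_tid, &node->trss_mintid) < 0 ||
            ItemPointerCompare(&slot->tts_tid, &node->trss_maxtid) > 0)
            return false;

        return true;
    }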
Backpatch to 14, where TID Range Scans were added.
Reported-by: Sophie Alpert <pg@sophiebits.com>
Author: Sophie Alpert <pg@sophiebits.com>
Author: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/4a6268ff-3340-453a-9bf5-c98d51a6f729@app.fastmail.com
Backpatch-through: 14
The EvalPlanQual recheck for TID Scan wasn't rechecking that the TID qual
still passed after following update chains. This could result in tuples
being updated or deleted by plans using TID Scans where the ctid of the
new (updated) tuple no longer matches the clause of the scan. This isn't
desired behavior, and isn't consistent with what would happen if the
chosen plan had used an Index or Seq Scan. It could also lead to
hard-to-predict behavior for scans that contain TID quals and other quals,
as the planner has the freedom to choose TID or some other scan method for
such queries, and the chosen plan could change at any moment.
Here we fix this by properly implementing the recheck function for TID
Scans.
Backpatch to 13, the oldest supported version.
Reported-by: Sophie Alpert <pg@sophiebits.com>
Author: Sophie Alpert <pg@sophiebits.com>
Author: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/4a6268ff-3340-453a-9bf5-c98d51a6f729@app.fastmail.com
Backpatch-through: 13
This reverts commit 98fc31d6499163a0a781aa6f13582a07f09cd7c6.
That change allowed DROP OWNED BY to drop grants of the target
role to other roles, arguing that nobody would need those
privileges anymore. But that's not so: if you're not superuser,
you still need admin privilege on the target role so you can
drop it.
It's not clear whether or how the dependency-based approach
to solving the original problem can be adapted to keep these
grants. Since v18 release is fast approaching, the sanest
thing to do seems to be to revert this patch for now. The
race-condition problem is low severity and not worth taking
risks for.
I didn't force a catversion bump in 98fc31d64, so I won't do
so here either.
Reported-by: Dipesh Dhameliya <dipeshdhameliya125@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CABgZEgczOFicCJoqtrH9gbYMe_BV3Hq8zzCBRcMgmU6LRsihUA@mail.gmail.com
Backpatch-through: 18
The COMMENT should depend on the separately-dumped constraint, not the
domain. Sufficient restore parallelism might fail the COMMENT command
by issuing it before the constraint exists. Back-patch to v13, like
commit 0858f0f96ebb891c8960994f023ed5a17b758a38.
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/20250913020233.fa.nmisch@google.com
Backpatch-through: 13
Up to now we've contented ourselves with a one-size-fits-all error
hint when we fail to find any match to a function or procedure call.
That was mostly okay in the beginning, but it was never great, and
since the introduction of named arguments it's really not adequate.
We at least ought to distinguish "function name doesn't exist" from
"function name exists, but not with those argument names". And the
rules for named-argument matching are arcane enough that some more
detail seems warranted if we match the argument names but the call
still doesn't work.
This patch creates a framework for dealing with these problems:
FuncnameGetCandidates and related code will now pass back a bitmask of
flags showing how far the match succeeded. This allows a considerable
amount of granularity in the reports. The set-bits-in-a-bitmask
approach means that when there are multiple candidate functions, the
report will reflect the match(es) that got the furthest, which seems
correct. Also, we can avoid mentioning "maybe add casts" unless
failure to match argument types is actually the issue.
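Schematically, the returned bitmask lets the error path distinguish cases
along these lines (the flag names are invented for illustration):

    #define FUNCMATCH_NAME      0x01   /* a function of that name exists */
    #define FUNCMATCH_ARGNAMES  0x02   /* the argument names matched */
    #define FUNCMATCH_ARGCOUNT  0x04   /* the argument count matched */
    #define FUNCMATCH_ARGTYPES  0x08   /* the argument types matched */

    /* only suggest adding casts when types were actually the problem */
    if ((matchmask & FUNCMATCH_ARGCOUNT) && !(matchmask & FUNCMATCH_ARGTYPES))
        hint = gettext_noop("You might need to add explicit type casts.");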
Extend the same return-a-bitmask approach to OpernameGetCandidates.
The issues around argument names don't apply to operator syntax,
but it still seems worth distinguishing between "there is no
operator of that name" and "we couldn't match the argument types".
While at it, adjust these messages and related ones to more strictly
separate "detail" from "hint", following our message style guidelines'
distinction between those.
Reported-by: Dominique Devienne <ddevienne@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/1756041.1754616558@sss.pgh.pa.us
In our previous discussions around making our regression tests
pass in FIPS mode, we concluded that we didn't need to support
the different error message wording observed with pre-3.0 OpenSSL.
However there are still a few LTS distributions soldiering along
with such versions, and now we have some in the buildfarm.
So let's add the variant expected-files needed to make them happy.
This commit only covers the core regression tests. Previous
discussion suggested that we might need some adjustments in
contrib as well, but it's not totally clear to me what those
would be. Rather than work it out from first principles,
I'll wait to see what the buildfarm shows.
Back-patch to v17 which is the oldest branch that claims
to support this case.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/443709.1757876535@sss.pgh.pa.us
Backpatch-through: 17
JsonConstructorExpr can produce non-NULL output with a NULL input, so
it should be treated as a non-strict construct. Failing to do so can
lead to incorrect query behavior.
For example, in the reported case, when pulling up a subquery that is
under an outer join, if the subquery's target list contains a
JsonConstructorExpr that uses subquery variables and it is mistakenly
treated as strict, it will be pulled up without being wrapped in a
PlaceHolderVar. As a result, the expression will be evaluated at the
wrong place and will not be forced to null when the outer join should
do so.
Back-patch to v16 where JsonConstructorExpr was introduced.
Bug: #19046
Reported-by: Runyuan He <runyuan@berkeley.edu>
Author: Tender Wang <tndrwang@gmail.com>
Co-authored-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/19046-765b6602b0a8cfdf@postgresql.org
Backpatch-through: 16
Previously we built the .map files for GB18030 (version 2000) from an
XML file. The 2022 version for this encoding is only available as a
Unicode Character Mapping (UCM) file, so as preparatory refactoring
switch to this format as the source for building version 2000.
As we do with most input files for the conversion mappings, download
the file on demand. In order to generate the same mappings we have
now, we must download from a previous upstream commit, rather than
the head since the latter contains a correction not present in our
current .map files.
The XML file is still used by EUC_CN, so we cannot delete it from our
repository. GB18030 is a superset of EUC_CN, so it may be possible to
build EUC_CN from the same UCM file, but that is left for future work.
Author: Chao Li <lic@highgo.com>
Discussion: https://postgr.es/m/966d9fc.169.198741fe60b.Coremail.jiaoshuntian%40highgo.com
Fix for commit 3c86223c998. That commit moved the typedef of pg_int64
from postgres_ext.h to libpq-fe.h, because the only remaining place
where it might be used is libpq users, and since the type is obsolete,
the intent was to limit its scope.
The problem is that if someone builds an extension against an
older (pre-PG18) server version and a new (PG18) libpq, they might get
two typedefs, depending on include file order. This is not allowed
under C99, so they might get warnings or errors, depending on the
compiler and options. The underlying types might also be
different (e.g., long int vs. long long int), which would also lead to
errors. This scenario is plausible when using the standard Debian
packaging, which provides only the newest libpq but per-major-version
server packages.
The fix is to undo that part of commit 3c86223c998. That way, the
typedef is in the same header file across versions. At least, this is
the safest fix doable before PostgreSQL 18 releases.
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://www.postgresql.org/message-id/25144219-5142-4589-89f8-4e76948b32db%40eisentraut.org
Previously, pg_dump incorrectly queried pg_seclabel to retrieve security labels
for subscriptions, which are stored in pg_shseclabel as they are global objects.
This could result in security labels for subscriptions not being dumped.
This commit fixes the issue by updating pg_dump to query the pg_seclabels view,
which aggregates entries from both pg_seclabel and pg_shseclabel.
While querying pg_shseclabel directly for subscriptions was an alternative,
using pg_seclabels is simpler and sufficient.
In addition, pg_dump is updated to dump security labels on event triggers,
which were previously omitted.
Backpatch to all supported versions.
Author: Jian He <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com
Backpatch-through: 13
This commit addresses two sources of instability in the 035_conflicts
test:
Unstable VACUUM usage:
The test previously relied on VACUUM to remove a deleted column, which can
be unreliable due to concurrent background writer or checkpoint activity
that may lock the page containing the deleted tuple. Since the test
already verifies that replication_slot.xmin has advanced sufficiently to
confirm the feature's correctness, the VACUUM step is removed to improve
test stability.
Timing-sensitive retention resumption check:
The test includes a check to confirm that retention of conflict-relevant
information resumes after setting max_retention_duration to 0. However, in
some cases, the apply worker resumes retention immediately after the
inactive slot is removed from synchronized_standby_slots, even before
max_retention_duration is updated. This can happen if remote changes are
applied in under 1ms, causing the test to timeout while waiting for a
later log position. To ensure consistent behavior, this commit delays the
removal of synchronized_standby_slots until after max_retention_duration
is set to 0.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/TY4PR01MB16907805DE4816E53C54708159414A@TY4PR01MB16907.jpnprd01.prod.outlook.com
Commit 7202d72787d added in passing some const qualifiers, but the one
on the postmaster_child_launch() startup_data argument was incorrect,
because the function itself modifies the pointed-to data. This is
hidden from the compiler because of casts. The qualifiers on the
functions called by postmaster_child_launch() are still correct.
Previously, pg_restore did not skip comments on policies even when
--no-policies was specified. As a result, it could issue COMMENT commands
for policies that were never created, causing those commands to fail.
This commit fixes the issue by ensuring that comments on policies
are also skipped when --no-policies is used.
Backpatch to v18, where --no-policies was added in pg_restore.
Author: Jian He <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com
Backpatch-through: 18
Previously, pg_restore did not skip comments on publications or subscriptions
even when --no-publications or --no-subscriptions was specified. As a result,
it could issue COMMENT commands for objects that were never created,
causing those commands to fail.
This commit fixes the issue by ensuring that comments on publications and
subscriptions are also skipped when the corresponding options are used.
Backpatch to all supported versions.
Author: Jian He <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CACJufxHCt00pR9h51AVu6+yPD5J7JQn=7dQXxqacj0XyDhc-fA@mail.gmail.com
Backpatch-through: 13
Add logic to _bt_set_startikey that determines whether row compare keys
are guaranteed to be satisfied by every tuple on a page that is about to
be read by _bt_readpage. This works in essentially the same way as the
existing scalar inequality logic. Testing has shown that the new logic
improves performance to about the same degree as the existing scalar
inequality logic (compared to the unoptimized case). In other words,
the new logic makes many row compare scans significantly faster.
Note that the new row compare inequality logic is only effective when
the same individual row member is the deciding subkey for all tuples on
the page (obviously, all tuples have to satisfy the row compare, too).
This is what makes the new row compare logic very similar to the
existing logic for scalar inequalities. Note, in particular, that this
makes it safe to ignore whether all row compare members are against
either ASC or DESC index attributes (i.e. it doesn't matter if
individual subkeys don't all use the same inequality strategy).
Also stop refusing to set pstate.startikey to an offset beyond any
nonrequired key (don't add logic that'll do that for an individual row
compare subkey, either). We can fully rely on our firstchangingattnum
tests instead. This will do the right thing when a page has a group of
tuples with NULLs in a lower-order attribute that makes the tuples fail
to satisfy a row compare key -- we won't incorrectly conclude that all
tuples must satisfy the row compare, just because firsttup and lasttup
happen to. Our firstchangingattnum test prevents that from happening.
(Note that the original "avoid evaluating nbtree scan keys" mechanism
added by commit e0b1ee17 couldn't support row compares due to issues
with tuples that contain NULLs in a lower-order subkey's attribute.
That original mechanism relied on requiredness markings, which the
replacement _bt_set_startikey mechanism never really needed.)
Follow up to commit 8a510275, which added the _bt_set_startikey
optimization. _bt_set_startikey is now feature complete; there's no
remaining kind of nbtree scan key that it still doesn't support.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/CAH2-WznL6Z3H_GTQze9d8T_Ls=cYbnd-_9f-Jo7aYgTGRUD58g@mail.gmail.com
fmgr.h defined some types such as fmNodePtr which is just Node *, but
it made its own types to avoid having to include various header files.
With C11, we can now instead typedef the original names without fear
of conflicts.
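The change is essentially of this shape (illustrative):

    /* before: a private alias, to avoid including the header for Node */
    typedef struct Node *fmNodePtr;

    /* after: C11 allows repeating a typedef, so use the real name */
    typedef struct Node Node;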
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/10d32190-f31b-40a5-b177-11db55597355@eisentraut.org
There are a number of forward declarations that use struct but not the
customary typedef, because that could have led to repeat typedefs,
which was not allowed. This is now allowed in C11, so we can update
these to provide the typedefs as well, so that the later uses of the
types look more consistent.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/10d32190-f31b-40a5-b177-11db55597355@eisentraut.org
This commit resumes automatic retention of conflict-relevant data for a
subscription. Previously, retention would stop if the apply process failed
to advance its xmin (oldest_nonremovable_xid) within the configured
max_retention_duration, and the user needed to manually re-enable the
retain_dead_tuples option. With this change, retention will resume
automatically once the apply worker catches up and begins advancing its
xmin (oldest_nonremovable_xid) within the configured threshold.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com
If extensions of equal names were installed in different directories
in the path, the views pg_available_extensions and
pg_available_extension_versions would show all of them, even though
only the first one was actually reachable by CREATE EXTENSION. To
fix, have those views skip extensions found later in the path if they
have names already found earlier.
Also add a bit of documentation that only the first extension in the
path can be used.
Reported-by: Pierrick <pierrick.chovelon@dalibo.com>
Discussion: https://www.postgresql.org/message-id/flat/8f5a0517-1cb8-4085-ae89-77e7454e27ba%40dalibo.com
The TimescaleDB extension expects to be able to change an nbtree scan's
keys across rescans. The issue arises in the extension's implementation
of loose index scan. This is arguably a misuse of the index AM API,
though apparently it worked until recently. It stopped working when the
skipScan flag was added to BTScanOpaqueData by commit 8a510275.
The flag wouldn't reliably track whether the scan (actually, the current
rescan) has any skip arrays, leading to confusion in _bt_set_startikey.
nbtree preprocessing will now defensively initialize the scan's skipScan
flag in all cases, including the case where _bt_preprocess_array_keys
returns early due to the (re)scan not using arrays. While nbtree isn't
obligated to support this use case (at least not according to my reading
of the index AM API), it still seems like a good idea to be consistent
here, on general robustness grounds.
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Natalya Aksman <natalya@timescale.com>
Discussion: https://postgr.es/m/CAJumhcirfMojbk20+W0YimbNDkwdECvJprQGQ-XqK--ph09nQw@mail.gmail.com
Backpatch-through: 18
Commit e3ffc3e91 fixed the translation of character classes in
SIMILAR TO regular expressions. Unfortunately the fix broke a corner
case: if there is an escape character right after the opening bracket
(for example in "[\q]"), a closing bracket right after the escape
sequence would not be seen as closing the character class.
There were two more oversights: a backslash or a nested opening bracket
right at the beginning of a character class should remove the special
meaning from any following caret or closing bracket.
This bug suggests that this code needs to be more readable, so also
rename the variables "charclass_depth" and "charclass_start" to
something more meaningful, rewrite an "if" cascade to be more
consistent, and improve the commentary.
Reported-by: Dominique Devienne <ddevienne@gmail.com>
Reported-by: Stephan Springl <springl-psql@bfw-online.de>
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAFCRh-8NwJd0jq6P=R3qhHyqU7hw0BTor3W0SvUcii24et+zAw@mail.gmail.com
Backpatch-through: 13
pg_regress has a --no-locale option that forces the temporary database to
have C locale. However, the C locale currently exists only in the 'builtin'
locale provider. This makes 'pg_regress --no-locale' fail when the default
locale provider is not 'builtin'. This commit makes 'pg_regress --no-locale'
specify both LOCALE='C' and LOCALE_PROVIDER='builtin'.
Discussion: https://postgr.es/m/b54921f95e23b4391b1613e9053a3d58%40postgrespro.ru
Author: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
It's not quite clear to me why this didn't show up in my local
check-world testing, but some of the buildfarm evidently runs
this test with a different database name. Adjust the test
so that that doesn't affect the reported error messages.
If the database or user had no entry in pg_db_role_setting,
RESET silently did nothing --- including not checking the
validity of the given GUC name. This is quite inconsistent
and surprising, because you *would* get such an error if there
were any pg_db_role_setting entry, even though it contains
values for unrelated GUCs.
While this is clearly a bug, changing it in stable branches seems
unwise. The effect will be that some ALTER commands that formerly
were no-ops will now be errors, and people don't like that sort of
thing in minor releases.
Author: Vitaly Davydov <v.davydov@postgrespro.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/30783e-68c28a00-9-41004480@130449754
Commit a0b99fc12 caused pg_event_trigger_dropped_objects()
to not fill the object_name field for schemas, which it
should have; and caused it to fill the object_name field
for default values, which it should not have.
In addition, triggers and RLS policies really should behave
the same way as we're making column defaults do; that is,
they should have is_temporary = true if they belong to a
temporary table.
Fix those things, and upgrade event_trigger.sql's woefully
inadequate test coverage of these secondary output columns.
As before, back-patch only to v15.
Reported-by: Sergey Shinderuk <s.shinderuk@postgrespro.ru>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/bd7b4651-1c26-4d30-832b-f942fabcb145@postgrespro.ru
Backpatch-through: 15
This unblocks rejection of that syntax. One copy was a misspelling of
"SET TABLESPACE pg_default" that instead made no persistent changes.
The other copy just needed to populate a DATABASEOID syscache entry.
This slightly raises database.sql test coverage of catcache.c, while
dbcommands.c coverage remains the same.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1802710.1757608564@sss.pgh.pa.us
A recently added nbtree preprocessing step failed to account for the
fact that DESC columns already had their B-Tree strategy number commuted
at this point in preprocessing. As a result, preprocessing could output
a set of scan keys where one or more keys had the correct strategy
number, but used the wrong comparison routine.
To fix, make the faulty code path that looks up a more restrictive
replacement operator/comparison routine commute its requested inequality
strategy (while outputting the transformed strategy number as before).
This makes the final transformed scan key comport with the approach
preprocessing has always used to deal with DESC columns (which is
described by comments above _bt_fix_scankey_strategy).
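As a hedged illustration of that convention (simplified from the logic
around _bt_fix_scankey_strategy, not the committed hunk):

    /* strategies for DESC columns are stored commuted, so looking up a
     * more restrictive replacement inequality must commute the
     * requested strategy as well */
    if (indoption[skey->sk_attno - 1] & INDOPTION_DESC)
        strat = BTCommuteStrategyNumber(strat);   /* e.g. < becomes > */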
Oversight in commit b3f1a13f, which made nbtree preprocessing
perform transformations on skip array inequalities that can reduce the
total number of index searches.
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Natalya Aksman <natalya@timescale.com>
Discussion: https://postgr.es/m/19049-b7df801e71de41b2@postgresql.org
Backpatch-through: 18
Users of logical decoding can encounter an unexpected change of
CurrentResourceOwner and CurrentMemoryContext. The problem is that,
unlike other call sites of RollbackAndReleaseCurrentSubTransaction(), in
reorderbuffer.c we fail to restore the original values of these global
variables after being clobbered by subtransaction abort. This patch
saves the values prior to the call and restores them eventually.
In addition, logical.c and logicalfuncs.c had a hack to restore resource
owner, presumably because of lack of this restore. Remove that.
Instead, because the test coverage here is not very consistent, add an
Assert() to ensure that the resowner is kept identical; this would make
it easy to detect other cases of bugs where we fail to restore the
resowner properly. This could be removed later.
This is arguably an old bug, but there appears to be no reason to
backpatch it and it's risky to do so, so refrain for now.
Author: Antonin Houska <ah@cybertec.at>
Reported-by: Mihail Nikalayeu <mihailnikalayeu@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Discussion: https://postgr.es/m/119497.1756892972@localhost
Its size was ~3.8GB before, which sometimes was not enough. The OpenBSD
CI task often failed due to no space left on device. Increase the RAM
disk size to ~4.6GB.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ2XVVPJRJmGB2DsL3gOrOinWh=HWvj6GO1cHzJ=6LwTag@mail.gmail.com
Backpatch-through: 18, where openbsd was added to CI
It was defining yyscan_t as a macro while the rest of the code uses a
typedef with #ifdef guards around it. The latter is also what the
flex generated code uses. So it seems best to make it look like those
other places for consistency.
The old way also had a potential for conflict if some code included
multiple headers providing yyscan_t. exprscan.l includes

    #include "fe_utils/psqlscan_int.h"
    #include "pgbench.h"

and fe_utils/psqlscan_int.h contains

    #ifndef YY_TYPEDEF_YY_SCANNER_T
    #define YY_TYPEDEF_YY_SCANNER_T
    typedef void *yyscan_t;
    #endif

which was then followed by pgbench.h

    #define yyscan_t void *

and then the generated code in exprscan.c

    #ifndef YY_TYPEDEF_YY_SCANNER_T
    #define YY_TYPEDEF_YY_SCANNER_T
    typedef void* yyscan_t;
    #endif
This works, but if the #ifdef guard in psqlscan_int.h is removed, this
fails.
We want to move toward allowing repeat typedefs, per C11, but for that
we need to make sure they are all the same.
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/10d32190-f31b-40a5-b177-11db55597355@eisentraut.org
If someone is stuck behind a lock for more than a second, that is
almost always a problem that is worth a log entry.
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-By: Michael Banck <mbanck@gmx.net>
Reviewed-By: Robert Haas <robertmhaas@gmail.com>
Reviewed-By: Christoph Berg <myon@debian.org>
Reviewed-By: Stephen Frost <sfrost@snowman.net>
Discussion: https://postgr.es/m/b8b8502915e50f44deb111bc0b43a99e2733e117.camel%40cybertec.at
Per discussion, this compiler suite is no longer maintained, and
it has not been able to compile PostgreSQL since at least PostgreSQL
17.
This removes all the remaining support code for this compiler.
Note that the Solaris operating system continues to be supported, but
using GCC as the compiler.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/a0f817ee-fb86-483a-8a14-b6f7f5991b6e%40eisentraut.org
Clang 21 shows some new compiler warnings, for example:
warning: variable 'dstsize' is uninitialized when passed as a const pointer argument here [-Wuninitialized-const-pointer]
The fix is to initialize the variables when they are defined. This is
similar to, for example, the existing situation in gistKeyIsEQ().
Discussion: https://www.postgresql.org/message-id/flat/6604ad6e-5934-43ac-8590-15113d6ae4b1%40eisentraut.org
The typedef Relids (Bitmapset *) is intended to represent a set of
relation identifiers, but was incorrectly used in several places to
store sets of attribute numbers. This is my oversight in e2debb643.
Fix that by replacing such usages with Bitmapset * to reflect the
correct semantics.
Author: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAEG8a3LJhp_xriXf39iCz0TsK+M-2biuhDhpLC6Baxw8+ZYT3A@mail.gmail.com
hashdesc.c was missing a couple of fields in its record descriptions, as
of:
- is_prev_bucket_same_wrt for SQUEEZE_PAGE.
- procid for INIT_META_PAGE.
- old_bucket_flag and new_bucket_flag for SPLIT_ALLOCATE_PAGE.
The author has noted the first hole, and I have spotted the others while
double-checking this area of the code. Note that the only data still
missing are the offsets stored in VACUUM_ONE_PAGE. We could perhaps add
them, if somebody sees value in this data, even if it makes the output
larger. For now, they remain discarded.
Author: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/CALdSSPjc-OVwtZH0Xrkvg7n=2ZwdbMJzqrm_ed_CfjiAzuKVGg@mail.gmail.com
In EXEC_BACKEND builds, GetNamedLWLockTranche() can segfault when
called outside of the postmaster process, as it might access
NamedLWLockTrancheRequestArray, which won't be initialized. Given
the lack of reports, this is apparently unusual, presumably because
it is usually called from a shmem_startup_hook like this:
    mystruct = ShmemInitStruct(..., &found);
    if (!found)
    {
        mystruct->locks = GetNamedLWLockTranche(...);
        ...
    }
This genre of shmem_startup_hook evades the aforementioned
segfaults because the struct is initialized in the postmaster, so
all other callers skip the !found path.
We considered modifying the documentation or requiring
GetNamedLWLockTranche() to be called from the postmaster, but
ultimately we decided to simply move the request array to shared
memory (and add it to the BackendParameters struct), thereby
allowing calls outside postmaster on all platforms. Since the main
shared memory segment is initialized after accepting LWLock tranche
requests, postmaster builds the request array in local memory first
and then copies it to shared memory later.
Given the lack of reports, back-patching seems unnecessary.
Reported-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0v1_15QPg5Sqd2Qz5rh_qcsyCeHHmRDY89xVHcy2yt5BQ%40mail.gmail.com
pg_event_trigger_dropped_objects() would report a column default
object with is_temporary = false, even if it belongs to a
temporary table. This seems clearly wrong, so adjust it to
report the table's temp-ness.
While here, refactor EventTriggerSQLDropAddObject to make its
handling of namespace objects less messy and avoid duplication
of the schema-lookup code. And add some explicit test coverage
of dropped-object reports for dependencies of temp tables.
Back-patch to v15. The bug exists further back, but the
GetAttrDefaultColumnAddress function this patch depends on does not,
and it doesn't seem worth adjusting it to cope with the older code.
Author: Antoine Violin <violin.antuan@gmail.com>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAFjUV9x3-hv0gihf+CtUc-1it0hh7Skp9iYFhMS7FJjtAeAptA@mail.gmail.com
Backpatch-through: 15
The comment mistakenly had "the others" for "the other", but this
commit also reorders the comment so it matches the macros below. Now we
describe the levels in increasing strictness. In addition, it seems
easier to follow if we introduce one level at a time, rather than
describing two, followed by "the other" (and then jumping back to one of
the first two). Finally, reword the sentence about the purpose of the
macros, which was slightly off-point.
Author: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Rustam ALLAKOV <rustamallakov@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA+renyUp=xja80rBaB6NpY3RRdi750y046x28bo_xg29zKY72Q@mail.gmail.com
This commit adds an isolation test showing that temporal foreign keys do
not permit referential integrity violations under concurrency, like
fk-snapshot-2. You can show that the test fails by passing false for
detectNewRows to ri_PerformCheck in ri_restrict.
Author: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Rustam ALLAKOV <rustamallakov@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA+renyUp=xja80rBaB6NpY3RRdi750y046x28bo_xg29zKY72Q@mail.gmail.com
This commit adds a missing isolation test for (non-PERIOD) foreign keys.
With REPEATABLE READ, one transaction can insert a referencing row while
another deletes the referenced row, and both see a valid state. But
after they have committed, the table violates referential integrity.
If the INSERT precedes the DELETE, we use a crosscheck snapshot to see
the just-added row, so that the DELETE can raise a foreign key error.
You can see the table violate referential integrity if you change
ri_restrict to pass false for detectNewRows to ri_PerformCheck.
A crosscheck snapshot is not needed when the DELETE comes first, because
the INSERT's trigger takes a FOR KEY SHARE lock that sees the row now
marked for deletion, waits for that transaction to commit, and raises a
serialization error. I (Paul) added a test for that too though.
We already have a similar test (in ri-triggers.spec) for SERIALIZABLE
snapshot isolation showing that you can implement foreign keys with just
pl/pgSQL, but that test does nothing to validate ri_triggers.c. We also
have tests (in fk-snapshot.spec) for other concurrency scenarios, but
not this one: we test concurrently deleting both the referencing and
referenced row, when the constraint activates a cascade/set null action.
But those tests don't exercise ri_restrict, and the consequence of
omitting a crosscheck comparison is different: a serialization failure,
not a referential integrity violation.
Author: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Rustam ALLAKOV <rustamallakov@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CA+renyUp=xja80rBaB6NpY3RRdi750y046x28bo_xg29zKY72Q@mail.gmail.com
The Sun Studio compiler complains about an empty declaration here.
Note for future historians: This does not mean that this compiler is
still of current interest for anyone using PostgreSQL. But we can let
this small fix be its parting gift.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/a0f817ee-fb86-483a-8a14-b6f7f5991b6e%40eisentraut.org
The test assumes that a backend will execute COMMIT PREPARED on the
publisher and hit the injection point commit-after-delay-checkpoint within
the commit critical section. This should cause the apply worker on the
subscriber to wait for the transaction to complete.
However, the test does not guarantee that the injection point is actually
triggered, creating a race condition where the apply worker may proceed
prematurely during COMMIT PREPARED.
This commit resolves the issue by explicitly waiting for the injection
point to be hit before continuing with the test, ensuring consistent and
reliable behavior.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Discussion: https://postgr.es/m/TY4PR01MB1690751D1CA8C128B0770EC6F9409A@TY4PR01MB16907.jpnprd01.prod.outlook.com
hash_xlog.h included descriptions for the blocks used in WAL records
that were not completely consistent with how the records are generated,
with one block missing for SQUEEZE_PAGE, and inconsistent descriptions
used for block 0 in VACUUM_ONE_PAGE and MOVE_PAGE_CONTENTS. This
information has been incorrect since c11453ce0aea, as found by
cross-checking the logic for the record generation.
Author: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/CALdSSPj1j=a1d1hVA3oabRFz0hSU3KKrYtZPijw4UPUM7LY9zw@mail.gmail.com
Backpatch-through: 13
GucSource_Names was documented as being in guc.c, but since 0a20ff54f5e6
it is located in guc_tables.c. The reference to the location of
GucSource_Names is important, as GucSource needs to be kept in sync with
GucSource_Names.
Author: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/CAKFQuwYPgAHWPYjPzK7iXzhSZ6MKR8w20_Nz7ZXpOvx=kZbs7A@mail.gmail.com
Backpatch-through: 16
If sizeof(Pointer) is 4 then sizeof(SortItem) will be 12, so that
if data->numrows is odd then we placed the values array at a location
that's not a multiple of 8. That was fine when sizeof(Datum) was also
4, but in the wake of commit 2a600a93c it makes some alignment-picky
machines unhappy. (You need a 32-bit machine that nonetheless expects
8-byte alignment of 8-byte quantities, which is an odd-seeming
combination but it does exist outside the Intel universe.)
To fix, MAXALIGN the space allocated to the SortItem array.
In passing, let's make the "len" variable be Size not int,
just for paranoia's sake.
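A minimal sketch of the resulting allocation pattern (names simplified
from the extended-statistics code):

    Size        len = MAXALIGN(numrows * sizeof(SortItem));
    SortItem   *items = (SortItem *) palloc(len + numrows * sizeof(Datum));

    /* values now starts on a maxaligned boundary even when
     * numrows * sizeof(SortItem) is not a multiple of 8 */
    Datum      *values = (Datum *) ((char *) items + len);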
This code was arguably not too safe even before 2a600a93c, but at
present I don't see a strong argument for back-patching.
Reported-by: Tomas Vondra <tomas@vondra.me>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/87036018-8d70-40ad-a0ac-192b07bd7b04@vondra.me
Instead of building a separate memory context that's used just
for running hash functions, make the hash functions run in the
per-tuple context of the node's innerecontext. This saves a
little space at runtime, and it avoids needing to reset two
contexts instead of one inside buildSubPlanHash's main loop.
This largely reverts commit 133924e13. That's safe to do now
because bf6c614a2 decoupled the evaluation context used by
TupleHashTableMatch from that used for hash function evaluation,
so that there's no longer a risk of resetting the innerecontext
too soon.
Per discussion of bug #19040, although this is not directly
a fix for that.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Haiyang Li <mohen.lhy@alibaba-inc.com>
Reviewed-by: Fei Changhong <feichanghong@qq.com>
Discussion: https://postgr.es/m/19040-c9b6073ef814f48c@postgresql.org
If the hash functions used for hashing tuples leaked any memory,
we failed to clean that up, resulting in query-lifespan memory
leakage in queries using hashed subplans. One way that could
happen is if the values being hashed require de-toasting, since
most of our hash functions don't trouble to clean up de-toasted
inputs.
Prior to commit bf6c614a2, this leakage was largely masked
because TupleHashTableMatch would reset hashtable->tempcxt
(via execTuplesMatch). But it doesn't do that anymore, and
that's not really the right place for this anyway: doing it
there could reset the tempcxt many times per hash lookup,
or not at all. Instead put reset calls into ExecHashSubPlan
and buildSubPlanHash. Along the way to that, rearrange
ExecHashSubPlan so that there's just one place to call
MemoryContextReset instead of several.
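A hedged sketch of the caller-side contract this settles on (simplified;
the actual reset sites are in ExecHashSubPlan and buildSubPlanHash):

    /* discard anything the hash functions leaked, e.g. de-toasted
     * inputs, before the next round of hash table work */
    MemoryContextReset(node->hashtable->tempcxt);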
This amounts to accepting the de-facto API spec that the caller
of the TupleHashTable routines is responsible for resetting the
tempcxt adequately often. Although the other callers seem to
get this right, it was not documented anywhere, so add a comment
about it.
Bug: #19040
Reported-by: Haiyang Li <mohen.lhy@alibaba-inc.com>
Author: Haiyang Li <mohen.lhy@alibaba-inc.com>
Reviewed-by: Fei Changhong <feichanghong@qq.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19040-c9b6073ef814f48c@postgresql.org
Backpatch-through: 13
autoconf builds have compiled this file with -ftree-vectorize since
commit 8870917623, but meson builds seem to have missed the memo.
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/aL85CeasM51-0D1h%40nathan
Backpatch-through: 16
All the calls replaced by this commit use 4-byte integers for the
variables given in input to my_log2(). Hence, the limit against
too-large inputs does not really apply. Thresholds are also applied in
each case (see the sketch after this list):
- In nodeAgg.c, the number of partitions is limited by
HASHAGG_MAX_PARTITIONS.
- In nodeHash.c, ExecChooseHashTableSize() caps its maximum number of
buckets based on HashJoinTuple and palloc() allocation limit.
- In worker.c, the number of subxacts tracked by ApplySubXactData uses
uint32, making pg_ceil_log2_64() safe to use directly.
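For illustration, the replacement is of this general shape (hedged, not
the exact committed hunks):

    /* before: dynahash.h helper with a guard against too-large inputs */
    i = my_log2(num_partitions);

    /* after: pg_bitutils.h routine, safe here because the input is a
     * bounded 4-byte value */
    i = pg_ceil_log2_32(num_partitions);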
Several approaches have been discussed, like an integration with
thresholds in pg_bitutils.h, but that was found confusing. This uses
Dean's idea, which gives a simpler result than my own attempt at making
it possible to remove dynahash.h. dynahash.h will be removed in a
follow-up commit, removing some duplication with the ceil log2 routines.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/CAEZATCUJPQD_7sC-wErak2CQGNa6bj2hY-mr8wsBki=kX7f2_A@mail.gmail.com
The startup process does not process shared invalidation messages, only
sends them, and never calls AtEOXact_SMgr(), which cleans up any
unpinned SMgrRelations. Hence, it is never able to free SMgrRelations
on a periodic basis, bloating its hashtable over time.
Like the checkpointer and the bgwriter, this commit takes a conservative
approach, periodically freeing SMgrRelations when replaying a checkpoint
record, either online or shutdown, so that the startup process has a way
to perform a periodic cleanup.
Issue caused by 21d9c3ee4ef7, so backpatch down to v17.
Author: Jingtang Zhang <mrdrivingduck@gmail.com>
Reviewed-by: Yuhang Qiu <iamqyh@gmail.com>
Discussion: https://postgr.es/m/28C687D4-F335-417E-B06C-6612A0BD5A10@gmail.com
Backpatch-through: 17
This section claims that each backend executes the
shmem_startup_hook shortly after attaching to shared memory, which
is true for EXEC_BACKEND builds, but not for others. This commit
adds this important detail.
Oversight in commit 964152c476.
Reported-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0vEGT1eigGbVt604LkXP6mUPMwPMxQoRCbFny44w%2B9EUQ%40mail.gmail.com
Backpatch-through: 17
Currently, test_slru's shmem_startup_hook unconditionally generates
new LWLock tranche IDs. This is fine on non-EXEC_BACKEND builds,
where only the postmaster executes this hook, but on EXEC_BACKEND
builds, every backend executes it, too. To fix, only generate the
tranche IDs in the postmaster process by checking the
IsUnderPostmaster variable.
This is arguably a bug fix and could be back-patched, but since the
damage is limited to some extra unused tranche IDs in a test
module, I'm not going to bother.
Reported-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0vaAuonaf12CeDddQJu5xKL%2B6xVyS%2B_q1%2BcH%3D33JXV82w%40mail.gmail.com
This adds 3 new variants of the random() function:
    random(min date, max date) returns date
    random(min timestamp, max timestamp) returns timestamp
    random(min timestamptz, max timestamptz) returns timestamptz
Each returns a random value x in the range min <= x <= max.
Author: Damien Clochard <damien@dalibo.info>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Vik Fearing <vik@postgresfriends.org>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/f524d8cab5914613d9e624d9ce177d3d@dalibo.info
Address a potential SIGSEGV that may occur when the tablesync worker
attempts to locate a deleted row while applying changes. This situation
arises during conflict detection for update-deleted scenarios.
To prevent this crash, ensure that the operation errors out early if the
leader apply worker is unavailable. Since the leader worker maintains
the necessary conflict detection metadata, proceeding without it serves no
purpose and risks reporting an incorrect conflict type.
In passing, improve a nearby comment.
Reported by Tom Lane as per Coverity
Author: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/334468.1757280992@sss.pgh.pa.us
Commit fd6ec93bf890314a and other previous work established the
principle that when an error is potentially reachable in case of on-disk
corruption but is not expected to be reached otherwise,
ERRCODE_DATA_CORRUPTED should be used. This allows log monitoring
software to search for evidence of corruption by filtering on the error
code.
Enhance the existing log messages emitted when the heap page is found to
be inconsistent with the VM by adding this error code.
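As a hedged sketch, the affected reports now follow the usual pattern
for corruption errors (message text illustrative, not the exact one):

    ereport(WARNING,
            (errcode(ERRCODE_DATA_CORRUPTED),
             errmsg("page %u of relation \"%s\" is inconsistent with the visibility map",
                    blkno, relname)));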
Suggested-by: Andrey Borodin <x4mmm@yandex-team.ru>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/87DD95AA-274F-4F4F-BAD9-7738E5B1F905%40yandex-team.ru
Commit 161a3e8b68 taught pg_upgrade to use COPY for large object
metadata for upgrades from v12 and newer, which is much faster to
restore than the proper large object commands. For upgrades from
v16 and newer, we can take this a step further and transfer the
large object metadata files as if they were user tables. We can't
transfer the files from older versions because the aclitem data
type (needed by pg_largeobject_metadata.lomacl) changed its storage
format in v16 (see commit 7b378237aa). Note that this commit is
essentially a revert of commit 12a53c732c.
There are a couple of caveats. First, we still need to COPY the
corresponding pg_shdepend rows for large objects. Second, we need
to COPY anything in pg_largeobject_metadata with a comment or
security label, else restoring those will fail. This means that an
upgrade in which every large object has a comment or security label
won't gain anything from this commit, but it should at least avoid
making those unusual use-cases any worse.
pg_upgrade must also take care to transfer the relfilenodes of
pg_largeobject_metadata and its index, as was done for
pg_largeobject in commits d498e052b4 and bbe08b8869.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aJ3_Gih_XW1_O2HF%40nathan
f83d709760d8 refactored xl_heap_prune and added an unused member,
reason. While PruneReason is used when constructing this WAL record to
set the WAL record definition, it doesn't need to be stored in a
separate field in the record. Remove it.
We won't backport this, since modifying an exposed struct definition to
remove an unused field would do more harm than good.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/tvvtfoxz5ykpsctxjbzxg3nldnzfc7geplrt2z2s54pmgto27y%40hbijsndifu45
rte->alias should point only to a user-written alias, but in these
cases that principle was violated. Fixing this causes some regression
test output changes: wherever rte->alias previously had a value and
is now NULL, rte->eref is now set to a generated name rather than to
rte->alias; and the scheme used to generate eref names differs from
what we were doing for aliases.
The upshot is that instead of "*SELECT*" or "*SELECT* %d",
EXPLAIN will now emit "unnamed_subquery" or "unnamed_subquery_%d".
But that's a reasonable descriptor, and we were already producing
that in yet other cases, so this seems not too objectionable.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Co-authored-by: Robert Haas <rhaas@postgresql.org>
Discussion: https://postgr.es/m/CA+TgmoYSYmDA2GvanzPMci084n+mVucv0bJ0HPbs6uhmMN6HMg@mail.gmail.com
Previously, heap_xlog_visible() called visibilitymap_pin() even after
getting a buffer from XLogReadBufferForRedoExtended() -- which returns a
pinned buffer containing the specified block of the visibility map.
This would just have resulted in visibilitymap_pin() returning early
since the specified page was already present and pinned, but it was
confusing extraneous code, so remove it. It doesn't seem worth
backporting, though.
It appears to be an oversight in 2c03216.
While we are at it, remove two VM-related redundant asserts in the COPY
FREEZE code path. visibilitymap_set() already asserts that
PD_ALL_VISIBLE is set on the heap page and checks that the vmbuffer
contains the bits corresponding to the specified heap block, so callers
do not also need to check this.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reported-by: Melanie Plageman <melanieplageman@gmail.com>
Reported-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CALdSSPhu7WZd%2BEfQDha1nz%3DDC93OtY1%3DUFEdWwSZsASka_2eRQ%40mail.gmail.com
A test has been added to ensure that conflict-relevant data is not
prematurely removed when a concurrent prepared transaction is being
committed on the publisher.
This test introduces an injection point that simulates the presence of a
prepared transaction in the commit phase, validating that the system
correctly delays conflict slot advancement until the transaction is fully
committed.
Additionally, the test serves as a safeguard for developers, ensuring that
the acquisition of the commit timestamp does not occur before marking
DELAY_CHKPT_IN_COMMIT in RecordTransactionCommitPrepared.
Reported-by: Robert Haas <robertmhaas@gmail.com>
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS9PR01MB16913F67856B0DA2A909788129400A@OS9PR01MB16913.jpnprd01.prod.outlook.com
A new pgstats entry is created as a two-step process:
- The entry is looked up in the shared hashtable of pgstats, and is
inserted if not found.
- When not found and inserted, its fields are then initialized. This
part includes a DSA chunk allocation for the stats data of the new entry.
As currently coded, if the DSA chunk allocation fails due to an
out-of-memory failure, an ERROR is generated, leaving in the pgstats
shared hashtable an inconsistent entry due to the first step, as the
entry has already been inserted in the hashtable. These broken entries
can then be found by other backends, crashing them.
There are only two callers of pgstat_init_entry(), when loading the
pgstats file at startup and when creating a new pgstats entry. This
commit changes pgstat_init_entry() so as we use dsa_allocate_extended()
with DSA_ALLOC_NO_OOM, making it return NULL on allocation failure
instead of failing. This way, a backend failing an entry creation can
take appropriate cleanup actions in the shared hashtable before throwing
an error. Currently, this means removing the entry from the shared
hashtable before throwing the error for the allocation failure.
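A minimal sketch of the new allocation step (simplified from
pgstat_init_entry(); the size argument is illustrative):

    dsa_pointer chunk = dsa_allocate_extended(pgStatLocal.dsa, size,
                                              DSA_ALLOC_NO_OOM);

    if (chunk == InvalidDsaPointer)
        return NULL;    /* caller drops the half-created hashtable entry
                         * before reporting the failure */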
Out-of-memory errors are unlikely to happen in the wild, and we usually
do not bother with back-patches when these are fixed. However, the
problem dealt with here is a degree worse, as it breaks the shared
memory state of pgstats, impacting other processes that may look at an
inconsistent entry that a different process has failed to create.
Author: Mikhail Kot <mikhail.kot@databricks.com>
Discussion: https://postgr.es/m/CAAi9E7jELo5_-sBENftnc2E8XhW2PKZJWfTC3i2y-GMQd2bcqQ@mail.gmail.com
Backpatch-through: 15
This commit fixes three issues:
1) When a disabled subscription is created with retain_dead_tuples set to true,
the launcher is not woken up immediately, which may lead to delays in creating
the conflict detection slot.
Creating the conflict detection slot is essential even when the subscription is
not enabled. This ensures that dead tuples are retained, which is necessary for
accurately identifying the type of conflict during replication.
2) Conflict-related data was unnecessarily retained when the subscription does
not have a table.
3) Conflict-relevant data could be prematurely removed before applying
prepared transactions on the publisher that are in the commit critical section.
This issue occurred because the backend executing COMMIT PREPARED was not
accounted for during the computation of oldestXid in the commit phase on
the publisher. As a result, the subscriber could advance the conflict
slot's xmin without waiting for such COMMIT PREPARED transactions to
complete.
We fixed this issue by identifying prepared transactions that are in the
commit critical section during computation of oldestXid in commit phase.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS9PR01MB16913DACB64E5721872AA5C02943BA@OS9PR01MB16913.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/OS9PR01MB16913F67856B0DA2A909788129400A@OS9PR01MB16913.jpnprd01.prod.outlook.com
This commit allows logging the raw parse tree in the same way we
currently log the parse tree, rewritten tree, and plan tree.
To avoid unnecessary log noise for users not interested in this
detail, a new GUC option, "debug_print_raw_parse", has been added.
When starting the PostgreSQL process with "-d N", and N is 3 or higher,
debug_print_raw_parse is enabled automatically, alongside
debug_print_parse.
Author: Chao Li <lic@highgo.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Tatsuo Ishii <ishii@postgresql.org>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Discussion: https://postgr.es/m/CAEoWx2mcO0Gpo4vd8kPMAFWeJLSp0MeUUnaLdE1x0tSVd-VzUw%40mail.gmail.com
This set of changes removes the list of available buffers and instead simply
uses the clock-sweep algorithm to find and return an available buffer. This
also removes the have_free_buffer() function and simply caps the
pg_autoprewarm process to at most NBuffers.
While on the surface this appears to remove an optimization, it in fact
eliminates code that induces overhead in the form of synchronization
that is problematic for multi-core systems.
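Hedged illustration of the clock-sweep selection this relies on (a
simplification; ClockSweepTick() is internal to freelist.c, and the real
StrategyGetBuffer() handles more cases):

    for (;;)
    {
        BufferDesc *buf = GetBufferDescriptor(ClockSweepTick());
        uint32      state = LockBufHdr(buf);

        if (BUF_STATE_GET_REFCOUNT(state) == 0 &&
            BUF_STATE_GET_USAGECOUNT(state) == 0)
            return buf;             /* victim found, header lock held */

        if (BUF_STATE_GET_USAGECOUNT(state) > 0)
            state -= BUF_USAGECOUNT_ONE;    /* age it, keep sweeping */
        UnlockBufHdr(buf, state);
    }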
The main reason for removing the freelist, however, is not the moderate
improvement in scalability, but that having the freelist would require
dedicated complexity in several upcoming patches. As we have not been able to
find a case benefiting from the freelist...
Author: Greg Burd <greg@burd.me>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/70C6A5B5-2A20-4D0B-BC73-EB09DD62D61C@getmailspring.com
Add an assert to visibilitymap_set() that the provided heap buffer is
exclusively locked, which is expected.
Also, enhance the debug logging message to specify which VM flags were
set.
Based on a related suggestion by Kirill Reshke on an in-progress
patchset.
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CALdSSPhAU56g1gGVT0%2BwG8RrSWE6qW8TOfNJS1HNAWX6wPgbFA%40mail.gmail.com
When executing a MERGE UPDATE action, if there is more than one
concurrent update of the target row, the lock-and-retry code would
sometimes incorrectly identify the latest version of the target tuple,
leading to incorrect results.
This was caused by using the ctid field from the TM_FailureData
returned by table_tuple_lock() in a case where the result was TM_Ok,
which is unsafe because the TM_FailureData struct is not guaranteed to
be fully populated in that case. Instead, it should use the tupleid
passed to (and updated by) table_tuple_lock().
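A hedged sketch of the distinction (call shape as in tableam.h; variable
names illustrative):

    result = table_tuple_lock(rel, tupleid, estate->es_snapshot, slot,
                              estate->es_output_cid, lockmode,
                              LockWaitBlock, lockflags, &tmfd);
    if (result == TM_Ok)
    {
        /* use the updated *tupleid; tmfd.ctid is only meaningful in
         * failure cases ("traversed" being the one exception) */
    }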
To reduce the chances of similar errors in the future, improve the
commentary for table_tuple_lock() and TM_FailureData to make it
clearer that table_tuple_lock() updates the tid passed to it, and most
fields of TM_FailureData should not be relied on in non-failure cases.
An exception to this is the "traversed" field, which is set in both
success and failure cases.
Reported-by: Dmitry <dsy.075@yandex.ru>
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/1570d30e-2b95-4239-b9c3-f7bf2f2f8556@yandex.ru
Backpatch-through: 15
SlruRecentlyUsed() is an inline function since 53c2a97a9266, not a
macro. The description of long_segment_names was missing at the top of
SimpleLruInit(), a part forgotten in 4ed8f0913bfd.
Author: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/aLpBLMOYwEQkaleF@jrouhaud
Backpatch-through: 17
This commit changes some functions related to the data type numeric to
use the soft error reporting rather than a custom boolean flag (called
"have_error") that callers of these functions could rely on to bypass
the generation of ERROR reports, letting the callers do their own error
handling (timestamp, jsonpath and numeric_to_char() require them).
This results in the removal of some boilerplate code that was required
to handle both the ereport() and the "have_error" code paths bypassing
ereport(), unifying everything under the soft error reporting facility.
While on it, some duplicated error messages are removed. The functions
upgraded in this commit were suffixed with "_opt_error" in their names.
They are renamed to use "_safe" instead.
This change relies on d9f7f5d32f20, that has introduced the soft error
reporting infrastructure.
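For reference, a minimal sketch of the soft-error calling convention
these functions now share (hypothetical call site and function name):

    ErrorSaveContext escontext = {T_ErrorSaveContext};

    result = numeric_mul_safe(a, b, (Node *) &escontext);
    if (escontext.error_occurred)
    {
        /* handle the failure locally; no ERROR has been thrown */
    }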
Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/CAAJ_b96No5h5tRuR+KhcC44YcYUCw8WAHuLoqqyyop8_k3+JDQ@mail.gmail.com
pg_lsn includes pg_lsn_in_internal() for the purpose of parsing a LSN
position for the GUC recovery_target_lsn (21f428ebde39). It relies on a
boolean called "have_error" that would be set when the LSN parsing
fails, letting its callers handle any errors.
d9f7f5d32f20 has added support for soft error reporting. This commit
removes some boilerplate code and switches the routine to use soft error
reporting directly, giving callers of pg_lsn_in_internal() the
possibility of being fed the error message generated on failure.
The pg_lsn_in_internal() routine is renamed to pg_lsn_in_safe(), for
consistency with other similar routines that are given an escontext.
Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/CAAJ_b96No5h5tRuR+KhcC44YcYUCw8WAHuLoqqyyop8_k3+JDQ@mail.gmail.com
Commit 38b602b028 modified this function to allocate enough space
for MAX_NAMED_TRANCHES (256) requests, which is likely far more
than most clusters need. This commit reverts that change so that
it first allocates enough space for only 16 requests and resizes
the array when necessary. While at it, remove the check for too
many tranches from this function. We can now rely on
InitializeLWLocks() to do that check via its calls to
LWLockNewTrancheId() for the named tranches.
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/aLmzwC2dRbqk14y6%40nathan
When executing a MERGE, check that the target relation supports all
actions mentioned in the MERGE command. Specifically, check that it
has a REPLICA IDENTITY if it publishes updates or deletes and the
MERGE command contains update or delete actions. Failing to do this
can silently break replication.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Tested-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/OS3PR01MB57180C87E43A679A730482DF94B62@OS3PR01MB5718.jpnprd01.prod.outlook.com
Backpatch-through: 15
If an INSERT has an ON CONFLICT DO UPDATE clause, the executor must
check that the target relation supports UPDATE as well as INSERT. In
particular, it must check that the target relation has a REPLICA
IDENTITY if it publishes updates. Formerly, it was not doing this
check, which could lead to silently breaking replication.
Fix by adding such a check to CheckValidResultRel(), which requires
adding a new onConflictAction argument. In back-branches, preserve ABI
compatibility by introducing a wrapper function with the original
signature.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Tested-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/OS3PR01MB57180C87E43A679A730482DF94B62@OS3PR01MB5718.jpnprd01.prod.outlook.com
Backpatch-through: 13
The counters saved from pgWalUsage, used for the difference calculations
when flushing the backend WAL stats, are updated when calling
pgstat_flush_backend() under PGSTAT_BACKEND_FLUSH_WAL, and not
pgstat_report_wal(). The comment updated in this commit referenced the
latter, but it is perfectly OK to flush the backend stats independently
of the WAL stats.
Noticed while looking at this area of the code, introduced by
76def4cdd7c2 as a copy-pasto.
Backpatch-through: 18
There are many places in this test program that need to consume a
PGresult while checking that its PQresultStatus is as-expected, or
related tasks such as checking that PQgetResult has nothing more to
return. These tasks were open-coded in a rather inconsistent way,
leading to some outright bugs, some memory leakage, and frequent
inconsistencies about what would be reported in event of an error.
Invent a few helper functions to standardize the behavior and
reduce code duplication. Also, rename the one pre-existing helper
function from confirm_query_canceled to consume_query_cancel, per
Álvaro's suggestion that "confirm" is a poor choice of verb for a
function that will discard the PGresult.
While at it, clean up assorted other places that were leaking
PGresults or even server connections. This is pure neatnik-ism,
since the test doesn't run long enough for those leaks to be of
any real-world concern.
While this fixes some things that are clearly bugs, it's only
a test program, and none of the bugs seem serious enough to
justify back-patching.
Bug: #18960
Reported-by: Dmitry Kovalenko <d.kovalenko@postgrespro.ru>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/18960-09cd4a5100152e58@postgresql.org
There are two ways for shared libraries to allocate their own
LWLock tranches. One way is to call RequestNamedLWLockTranche() in
a shmem_request_hook, which requires the library to be loaded via
shared_preload_libraries. The other way is to call
LWLockNewTrancheId(), which is not subject to the same
restrictions. However, LWLockNewTrancheId() does require each
backend to store the tranche's name in backend-local memory via
LWLockRegisterTranche(). This API is a little cumbersome and leads
to things like unhelpful pg_stat_activity.wait_event values in
backends that haven't loaded the library.
This commit moves these LWLock tranche names to shared memory, thus
eliminating the need for each backend to call
LWLockRegisterTranche(). Instead, the tranche name must be
provided to LWLockNewTrancheId(), which immediately makes the name
available to all backends. Since the tranche name array is
append-only, lookups can ordinarily avoid locking as long as their
local copy of the LWLock counter is greater than the requested
tranche ID.
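A hedged sketch of the simplified extension-side flow (names
illustrative):

    /* before: allocate an ID, then register the name in every backend */
    tranche_id = LWLockNewTrancheId();
    LWLockRegisterTranche(tranche_id, "my_extension");

    /* after: the name is passed once and lives in shared memory */
    tranche_id = LWLockNewTrancheId("my_extension");
    LWLockInitialize(&state->lock, tranche_id);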
One downside of this approach is that we now have a hard limit on
both the length of tranche names (NAMEDATALEN-1 bytes) and the
number of dynamically-allocated tranches (256). Besides a limit of
NAMEDATALEN-1 bytes for tranche names registered via
RequestNamedLWLockTranche(), no such limits previously existed. We
could avoid these new limits by using dynamic shared memory, but
the complexity involved didn't seem worth it. We briefly
considered making the tranche limit user-configurable but
ultimately decided against that, too. Since there is still a lot
of time left in the v19 development cycle, it's possible we will
revisit this choice.
Author: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CAA5RZ0vvED3naph8My8Szv6DL4AxOVK3eTPS0qXsaKi%3DbVdW2A%40mail.gmail.com
Meson's "auto" feature mode silently disables features with missing
prerequisites, which is nice for development but can lead to false
positives in the CI (such as my commit b0635bfda, which broke OAuth
detection on OpenBSD). Use an explicit feature list in the Cirrus config
instead; this mirrors the --with-XXX experience of Autoconf.
While we're here, also move common configuration options into a single
variable, MESON_COMMON_PG_CONFIG_ARGS, as suggested by Peter. The
resulting hierarchy is as follows:
    MESON_COMMON_PG_CONFIG_ARGS   "global" Meson configuration options
    MESON_COMMON_FEATURES         the default set of CI features, to be
                                  used unless there's a specific reason
                                  not to
    MESON_FEATURES                per-OS feature configuration,
                                  overriding the above
The current exceptions to the use of MESON_COMMON_FEATURES are
- SanityCheck, which uses almost no dependencies;
- Windows - VS, whose feature list has diverged significantly from the
others; and
- Linux, which continues to use 'auto' features so that autodetection is
still tested in the CI. (Options shared between 64- and 32-bit builds
can go into LINUX_MESON_FEATURES instead.)
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Suggested-by: Jacob Champion <jacob.champion@enterprisedb.com>
Suggested-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/flat/CAN55FZ0aO8d_jkyRijcGP8qO%3DXH09qG%3Dpw0ZZDvB4LMzuXYU1w%40mail.gmail.com
The PG_TEST_EXTRA environment variable is already set at the top level.
As of 3d1aec225, Meson tasks will use this by default, so there's no
need for another intermediate variable.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Suggested-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/flat/CAN55FZ0aO8d_jkyRijcGP8qO%3DXH09qG%3Dpw0ZZDvB4LMzuXYU1w%40mail.gmail.com
Commit 63599896545 had it so that the parameter "debug_discard_caches"
did not exist unless DISCARD_CACHES_ENABLED was defined (typically via
enabling asserts). This was a mistake; it did not correspond to the
prior setup. Several tests use this parameter, so they would now fail
if you did not have asserts enabled.
Store the information in guc_tables.c in a .dat file similar to the
catalog data in src/include/catalog/, and generate a part of
guc_tables.c from that. The goal is to make it easier to edit that
information, and to be able to make changes to the downstream data
structures more easily. (Essentially, those are the same reasons as
for the original adoption of the .dat format.)
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: David E. Wheeler <david@justatheory.com>
Discussion: https://www.postgresql.org/message-id/flat/dae6fe89-1e0c-4c3f-8d92-19d23374fb10%40eisentraut.org
SubPlan nodes are typically built very early, before any RelOptInfos
have been constructed for the parent query level. As a result, the
simple_rel_array in the parent root has not yet been initialized.
Currently, during cost estimation of a SubPlan's testexpr, we may call
examine_variable() to look up statistical data about the expressions.
This can lead to "no relation entry for relid" errors.
To fix, pass root as NULL to cost_qual_eval() in cost_subplan(), since
the root does not yet contain enough information to safely consult
statistics.
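A minimal sketch of the fix's shape (simplified from cost_subplan()):

    /* root may not have a valid simple_rel_array yet, so pass NULL to
     * keep cost_qual_eval() away from statistics lookups */
    cost_qual_eval(&qual_cost,
                   make_ands_implicit((Expr *) subplan->testexpr),
                   NULL);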
One exception is SubPlan nodes built for the initplans of MIN/MAX
aggregates from indexes. In this case, having a NULL root is safe
because testexpr will be NULL. Additionally, an initplan will by
definition not consult anything from the parent plan.
Backpatch to all supported branches. Although the reported call path
that triggers this error is not reachable prior to v17, there's no
guarantee that other code paths -- especially in extensions -- could
not encounter the same issue when cost_qual_eval() is called with a
root that lacks a valid simple_rel_array. The test case is not
included in pre-v17 branches though.
Bug: #19037
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19037-3d1c7bb553c7ce84@postgresql.org
Backpatch-through: 13
PQtrace() was generating its output for non-printable characters without
casting them to unsigned char, leading to some extra "\xffffff" prefixes
in the output due to the fact that char may be signed.
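A hedged sketch of the class of fix (illustrative, not the exact hunk):

    /* without the cast, a negative char is sign-extended to int and
     * prints as "\xffffffXY" instead of "\xXY" */
    fprintf(f, "\\x%02x", (unsigned char) *p);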
Oversights introduced by commit 198b3716dba6, so backpatch down to v14.
Author: Ran Benita <ran@unusedvar.com>
Discussion: https://postgr.es/m/a3383211-4539-459b-9d51-95c736ef08e0@app.fastmail.com
Backpatch-through: 14
SLRU bank locks are referred to as "bank locks" or "SLRU bank locks" in
the code comments. The comments updated in this commit use the latter
term.
Oversight in 53c2a97a9266, which replaced the single ControlLock with
the bank control locks.
Author: Julien Rouhaud <julien.rouhaud@free.fr>
Discussion: https://postgr.es/m/aLUT2UO8RjJOzZNq@jrouhaud
Backpatch-through: 17
COPY TO does not support a WHERE clause, and currently fails with the error:

    ERROR: WHERE clause not allowed with COPY TO

Since the intended behavior can be achieved by using
COPY (SELECT ... WHERE ...) TO, this commit adds a HINT
to the error message:

    HINT: Try the COPY (SELECT ... WHERE ...) TO variant.

This makes the error more informative and helps users
quickly find the alternative usage.
Author: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Discussion: https://postgr.es/m/3520c224c5ffac0113aef84a9179f37e@oss.nttdata.com
Note that this doesn't require bumping SLOT_VERSION because we
require sizeof(bool) == 1, thanks to commit 97525bc5c8.
Oversight in commit ddd5f4f54a.
Author: Ranier Vilela <ranier.vf@gmail.com>
Previously, duplicate labels in CREATE TYPE AS ENUM were caught by
the unique index on pg_enum, resulting in a generic error message.
While this was evidently intentional, it's not terribly user-friendly,
nor consistent with the ALTER TYPE cases which take more care with
such errors. This patch adds an explicit check to produce a more
user-friendly and descriptive error message.
A potential objection to this implementation is that it adds O(N^2)
work to the creation operation. However, quick testing finds that
that's pretty negligible below 1000 enum labels, and tolerable even
at 10000. So it doesn't really seem worth being smarter.
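A hedged sketch of the check's general shape (names and message text
illustrative):

    ListCell   *lc;

    foreach(lc, vals)
    {
        const char *label = strVal(lfirst(lc));
        ListCell   *lc2;

        /* quadratic, but negligible at realistic label counts */
        for_each_from(lc2, vals, foreach_current_index(lc) + 1)
        {
            if (strcmp(strVal(lfirst(lc2)), label) == 0)
                ereport(ERROR,
                        errcode(ERRCODE_DUPLICATE_OBJECT),
                        errmsg("enum label \"%s\" used more than once",
                               label));
        }
    }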
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/20250704000402.37e605ab0c59c300965a17ee@sraoss.co.jp
This change replaces seven function definitions with macros, reducing a
bit some repetitive patterns in the code. An interesting side effect is
that this removes an inconsistency in the naming of SLRU increment
functions with respect to the field names.
This change is similar to 850f4b4c8cab, 8018ffbf5895 or 83a1a1b56645.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aLHA//gr4dTpDHHC@ip-10-97-1-34.eu-west-3.compute.internal
This commit introduces a new subscription parameter,
max_retention_duration, aimed at mitigating excessive accumulation of dead
tuples when retain_dead_tuples is enabled and the apply worker lags behind
the publisher.
When the time spent advancing a non-removable transaction ID exceeds the
max_retention_duration threshold, the apply worker will stop retaining
conflict detection information. In such cases, the conflict slot's xmin
will be set to InvalidTransactionId, provided that all apply workers
associated with the subscription (with retain_dead_tuples enabled) confirm
the retention duration has been exceeded.
To ensure retention status persists across server restarts, a new column
subretentionactive has been added to the pg_subscription catalog. This
prevents unnecessary reactivation of retention logic after a restart.
The conflict detection slot will not be automatically re-initialized
unless a new subscription is created with retain_dead_tuples = true, or
the user manually re-enables retain_dead_tuples.
A future patch will introduce support for automatic slot re-initialization
once at least one apply worker confirms that the retention duration is
within the configured max_retention_duration.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com
Several statements need to reference the current connection's current
database name and current port value. Until now, this has been
accomplished by creating dynamic SQL statements inside of a DO block,
which is not as easy to parse. It also takes away some of the
granularity of any error messages that might occur, making debugging
harder.
By capturing the connection-specific settings into psql variables, it
becomes possible to write simpler SQL statements for the FDW objects.
This eliminates most of the DO blocks used in this test, making it a
bit shorter and more readable.
Author: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/CADkLM=cpUiJ3QF7aUthTvaVMmgQcm7QqZBRMDLhBRTR+gJX-Og@mail.gmail.com
Constraint expressions and statistics expressions loaded from the
system catalogs need to be run through const-simplification, because
the planner will be comparing them to similarly-processed qual
clauses. Without this step, the planner may fail to detect valid
matches.
Currently, NullTest clauses in these expressions may not be reduced
correctly during const-simplification. This happens because their Var
nodes do not yet have the correct varno when eval_const_expressions is
applied. Since eval_const_expressions relies on varno to reduce
NullTest quals, incorrect varno can cause problems.
Additionally, for statistics expressions, eval_const_expressions is
called with root set to NULL, which also inhibits NullTest reduction.
This patch fixes the issue by ensuring that Vars are updated to have
the correct varno before const-simplification, and that a valid root
is passed to eval_const_expressions when needed.
Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/19007-4cc6e252ed8aa54a@postgresql.org
A proposed patch would place a limit of NAMEDATALEN-1 (i.e., 63)
bytes on the names of dynamically-allocated LWLock tranches, but
GetNamedDSA() and GetNamedDSHash() may register tranches with
longer names. This commit lowers the maximum DSM registry entry
name length to NAMEDATALEN-1 bytes and modifies GetNamedDSHash() to
create only one tranche, thereby allowing us to keep the DSM
registry's tranche names below NAMEDATALEN bytes.
Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/aKzIg1JryN1qhNuy%40nathan
Show the requested lock level and the object being waited on,
in the same format we use for deadlock reports and similar errors.
This is particularly helpful for debugging lock-timeout errors,
since otherwise the user has very little to go on about which
lock timed out. The performance cost of setting up the callback
should be negligible compared to the other tracing support already
present in WaitOnLock.
As in the deadlock-report case, we just show numeric object OIDs,
because it seems too scary to try to perform catalog lookups
in this context.
Reported-by: Steve Baldwin <steve.baldwin@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1602369.1752167154@sss.pgh.pa.us
Compression in pg_dump is abstracted using an API with multiple
implementations which can be selected at runtime by the user.
The API and its implementations have evolved over time, notable
commits include bf9aa490db, e9960732a9, 84adc8e20, and 0da243fed.
The error handling defined by the API was, however, problematic, and
the implementations had a few bugs and/or did not follow the API
specification. This commit modifies the API to ensure that callers
can perform error handling efficiently, and fixes all the
implementations so that they all implement the API in the same
way. A full list of the changes can be seen below.
* write_func:
- Make write_func throw an error on all error conditions. All
callers of write_func were already checking for success and
calling pg_fatal on all errors, so we might as well make the
API support that case directly with simpler error handling as
a result.
* open_func:
- zstd: move stream initialization from the open function to
the read and write functions as they can have fatal errors.
Also ensure to dup the file descriptor like none and gzip.
- lz4: Ensure to dup the file descriptor like none and gzip.
* close_func:
- zstd: Ensure to close the file descriptor even if closing
down the compressor fails, and clean up state allocation on
fclose failures. Make sure to capture errors set by fclose.
- lz4: Ensure to close the file descriptor even if closing
down the compressor fails, and instead of calling pg_fatal
log the failures using pg_log_error. Make sure to capture
errors set by fclose.
- none: Make sure to catch errors set by fclose.
* read_func / gets_func:
- Make read_func unconditionally return the number of read
bytes instead of making it optional per implementation.
- lz4: Make sure to throw an error and not return -1.
- gzip: gzread returning zero cannot be assumed to indicate
EOF as it is documented to return zero for some types of
errors.
- lz4, zstd: Convert the _read_internal helper functions to
not call pg_fatal on errors to be able to handle gets_func
returning NULL on error.
* getc_func:
- zstd: Use an unsigned char rather than an int to read the char
into.
* LZ4Stream_init:
- Make sure to not switch to the inited state until we know that
initialization succeeded, and reset errno just in case.
On top of these changes there are minor comment cleanups and
improvements as well as an attempt to consistently reset errno
in codepaths where it is inspected.
This work was initiated by a report of API misuse, which turned
into a larger body of work. As this is an internal API these
changes can be backpatched into all affected branches.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: Evgeniy Gorbanev <gorbanyoves@basealt.ru>
Discussion: https://postgr.es/m/517794.1750082166@sss.pgh.pa.us
Backpatch-through: 16
Using the LWLockCounter requires first calculating its address in
shared memory like this:
LWLockCounter = (int *) ((char *) MainLWLockArray - sizeof(int));
Commit 82e861fbe1 started this trend in order to fix EXEC_BACKEND
builds, but it could also be fixed by adding it to the
BackendParameters struct. The current approach is somewhat
difficult to follow, so this commit switches to the latter. While
at it, swap around the code in LWLockShmemSize() to match the order
of assignments in CreateLWLocks() for added readability.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aLDLnan9gNCS9fHx%40nathan
Newer gcc versions will emit warnings about missing extern
declarations if certain header files are compiled by themselves.
Add the "extern" declarations needed to quiet that.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/1127775.1754417387@sss.pgh.pa.us
It's possible, if the only live partition is concurrently dropped
and try_table_open() fails, that bms_del_member() will pfree the
live_parts Bitmapset. Since the bms_del_member() call does not assign
the result back to the live_parts local variable, the while loop could
segfault, as that variable would still reference the pfree'd Bitmapset.
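In code form, the buggy and fixed patterns look roughly like this (a
sketch; the function and variable names are illustrative):
#include "postgres.h"
#include "nodes/bitmapset.h"

static Bitmapset *
remove_dropped_partition(Bitmapset *live_parts, int partidx)
{
    /*
     * Buggy pattern: once the set becomes empty, bms_del_member()
     * pfrees it and returns NULL, so discarding the result leaves
     * live_parts pointing at freed memory:
     *
     *     bms_del_member(live_parts, partidx);
     *
     * Fixed pattern: always assign the result back.
     */
    live_parts = bms_del_member(live_parts, partidx);
    return live_parts;
}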
Backpatch to 15. 52f3de874 was backpatched to 14, but there's no
bms_del_member() there due to live_parts not yet existing in RelOptInfo in
that version. Technically there's no bug in version 15 as
bms_del_member() didn't pfree when the set became empty prior to
00b41463c (from v16). Applied to v15 anyway to keep the code similar and
to avoid the bad coding pattern.
Author: Bernd Reiß <bd_reiss@gmx.at>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/6b88f27a-c45c-4826-8e37-d61a04d90182@gmx.at
Backpatch-through: 15
I think the error message for a different condition was inadvertently
copied.
This problem seems to have been introduced by commit a4d75c86bf15.
Author: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: jian he <jian.universality@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 14
Discussion: https://postgr.es/m/CACJufxEZ48toGH0Em_6vdsT57Y3L8pLF=DZCQ_gCii6=C3MeXw@mail.gmail.com
Ignore src/include/port/win32/sys/resource.h. At least on macOS,
including this results in warnings and errors because of duplication
with system headers:
../src/include/port/win32/sys/resource.h:10:9: warning: 'RUSAGE_CHILDREN' redefined
../src/include/port/win32/sys/resource.h:16:1: error: redefinition of struct or union 'struct rusage'
Since we are also not checking similar system-replacement headers for
Windows, it makes sense to exclude this one, too.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/1127775.1754417387%40sss.pgh.pa.us
Otherwise, headerscheck will fail if the ICU headers are in a location
not reached by the normal CFLAGS/CPPFLAGS:
../src/include/utils/pg_locale.h:21:10: fatal error: unicode/ucol.h: No such file or directory
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/1127775.1754417387%40sss.pgh.pa.us
BufferGetPage() already returns type Page, so casting it to Page
doesn't achieve anything. A sizable number of call sites do this
casting; remove it.
This was already done inconsistently in the code in the first import
in 1996 (but didn't exist in the pre-1995 code), and it was then
apparently just copied around.
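For illustration, a minimal sketch (assuming a valid buffer):
#include "postgres.h"
#include "storage/bufmgr.h"
#include "storage/bufpage.h"

static void
example(Buffer buf)
{
    Page    page;

    page = (Page) BufferGetPage(buf);   /* before: redundant cast */
    page = BufferGetPage(buf);          /* after: already returns Page */
    (void) page;
}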
Author: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/CALdSSPgFhc5=vLqHdk-zCcnztC0zEY3EU_Q6a9vPEaw7FkE9Vw@mail.gmail.com
For a child relation, we should not assume that its parent's
unique-ified relation (or unique-ified path in v18) always exists. In
cases where all RHS columns that need to be unique-ified are equated
to constants, the unique-ified relation/path for the parent table is
not built, as there are no columns left to unique-ify. Failing to
account for this can result in a SIGSEGV crash during planning.
This patch checks whether the parent's unique-ified relation or path
exists and skips unique-ification of the child relation if it does
not.
Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAMbWs49MOdLW2c+qbLHHBt8VBu=4ONpM91D19=AWeW93eFUF6A@mail.gmail.com
Backpatch-through: 18
Previously, we used LW_EXCLUSIVE in several places despite only reading
WalSummarizerCtl fields. This patch reduces the lock level to LW_SHARED
where we are only reading the shared fields.
Backpatch to 17, where wal summarization was introduced.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/CAD21AoDdKhf_9oriEYxY-JCdF+Oe_muhca3pcdkMEdBMzyHyKw@mail.gmail.com
Backpatch-through: 17
One mechanism we have for implementing semi-joins is to de-duplicate
the output of the RHS and then treat the join as a plain inner join.
Initial construction of the join's SpecialJoinInfo identifies the
RHS columns that need to be de-duplicated, but later we may find that
some of those don't need to be handled explicitly, either because
they're known to be constant or because they are redundant with some
previous column.
Up to now, while sort-based de-duplication handled such cases well,
hash-based de-duplication didn't: we'd still hash on all of the
originally-identified columns. This is probably not a very big
deal performance-wise, but in the wake of commit a3179ab69 it can
cause planner errors. That happens when join elimination causes
recalculation of variables' attr_needed bitmapsets, and we decide
that a variable mentioned in a semijoin clause doesn't need to be
propagated up to the join level anymore.
There are a number of ways we could slice the blame for this, but the
only fix that doesn't result in pessimizing plans for loosely-related
cases is to be more careful about not hashing columns we don't
actually need to de-duplicate. We can install that consideration
into create_unique_paths in master, or the predecessor code in
create_unique_path in v18, without much refactoring.
(As follow-up work, it might be a good idea to look at more-invasive
refactoring, in hopes of preventing other bugs in this area. But
with v18 release so close, there's not time for that now, nor would
we be likely to want to put such refactoring into v18 anyway.)
Reported-by: Sergey Soloviev <sergey.soloviev@tantorlabs.ru>
Diagnosed-by: Richard Guo <guofenglinux@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/1fd1a421-4609-4d46-a1af-ab74d5de504a@tantorlabs.ru
Backpatch-through: 18
This has been done historically because of get_database_name (which
since commit cb98e6fb8fd4 belongs in lsyscache.c/h, so let's move it
there) and get_database_oid (which is in the right place, but whose
declaration should appear in pg_database.h rather than dbcommands.h).
Clean this up.
Also, xlogreader.h and stringinfo.h are no longer needed by dbcommands.h
since commit f1fd515b393a, so remove them.
Author: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/202508191031.5ipojyuaswzt@alvherre.pgsql
During an investigation into rather odd AIO-related errors on macOS,
observed by Alexander and Konstantin, we started to wonder if bitfield
access is related to the error. At the moment it looks like it is: we
cannot reproduce the failures when replacing the bitfields. In
addition, the problem can only be reproduced with some compiler
[versions], and not everyone has been able to reproduce the issue.
The observed problem is that, very rarely, PgAioHandle->{state,target} are in
an inconsistent state, after having been checked to be in a valid state not
long before, triggering an assertion failure. Unfortunately, this could be
caused by wrong compiler code generation or somehow by missing memory barriers
- we don't really know. In theory there should not be any concurrent write
access to the handle in the state the bug is triggered, as the handle was idle
and is just being initialized.
Separately from the bug, we observed that at least gcc and clang generate
rather terrible code for the bitfield access. Even if it's not clear if the
observed assertion failure is actually caused by the bitfield somehow, the bad
code generation alone is sufficient reason to stop using bitfields.
Therefore, replace the enum bitfields with uint8s and instead cast in each
switch statement.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reported-by: Konstantin Knizhnik <knizhnik@garret.ru>
Discussion: https://postgr.es/m/1500090.1745443021@sss.pgh.pa.us
Backpatch-through: 18
Commit d31bbfb6590 lost some test coverage, because the situation
being tested, a concurrent DROP, cannot happen anymore. Put the test
coverage back with a bit of a trick, by deleting directly from the
catalog table.
Co-authored-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/flat/bf72b82c-124d-4efa-a484-bb928e9494e4@eisentraut.org
Commit d31bbfb6590 removed the comment at objectNamesToOids() that
there is no locking, because that commit added locking. But to fix
all the problems, we'd still need a stronger lock. So put the comment
back with a more detailed explanation.
Co-authored-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/flat/bf72b82c-124d-4efa-a484-bb928e9494e4@eisentraut.org
This patch fixes a bug in how 'load_external_function' handles
'$libdir/' prefixes in module paths.
Previously, 'load_external_function' would unconditionally strip
'$libdir/' from the beginning of the 'filename' string. This caused
an issue when the path was nested, such as "$libdir/nested/my_lib".
Stripping the prefix resulted in a path of "nested/my_lib", which
would fail to be found by the expand_dynamic_library_name function
because the original '$libdir' macro was removed.
To fix this, the code now checks for the presence of an additional
directory separator ('/' or '\') after the '$libdir/' prefix. The
prefix is only stripped if the remaining string does not contain a
separator. This ensures that simple filenames like '"$libdir/my_lib"'
are correctly handled, while nested paths are left intact for
'expand_dynamic_library_name' to process correctly.
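A hedged sketch of that check (names are illustrative; the real logic
lives in the dynamic-loader path):
#include "postgres.h"
#include <string.h>

static const char *
strip_libdir_prefix(const char *filename)
{
    const char *prefix = "$libdir/";
    size_t      plen = strlen(prefix);

    if (strncmp(filename, prefix, plen) == 0)
    {
        const char *rest = filename + plen;

        /*
         * Strip the prefix only for simple names such as
         * "$libdir/my_lib"; keep nested paths such as
         * "$libdir/nested/my_lib" intact so that
         * expand_dynamic_library_name() can expand the macro.
         */
        if (strchr(rest, '/') == NULL && strchr(rest, '\\') == NULL)
            return rest;
    }
    return filename;
}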
Reported-by: Dilip Kumar <dilipbalaut@gmail.com>
Co-authored-by: Matheus Alcantara <matheusssilv97@gmail.com>
Co-authored-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Srinath Reddy Sadipiralla <srinath2133@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAFiTN-uKNzAro4tVwtJhF1UqcygfJ%2BR%2BRL%3Db-_ZMYE3LdHoGhA%40mail.gmail.com
When checking for expression indexes that may be affected by a Unicode
update during upgrade, check for a few more functions. Specifically,
check for documented regexp functions, as well as the new CASEFOLD()
function.
Also, fully-qualify references to pg_catalog.text and
pg_catalog.regtype.
Discussion: https://postgr.es/m/399b656a3abb0c9283538a040f72199c0601525c.camel@j-davis.com
Backpatch-through: 18
Followup to 4e1e41733 and 52ecd05ae. oauth-utils.c uses
pthread_sigmask(), requiring -pthread on Debian bullseye at minimum.
Reported-by: Christoph Berg <myon@debian.org>
Tested-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/aK4PZgC0wuwQ5xSK%40msg.df7cb.de
Backpatch-through: 18
When vacuumdb's --missing-stats-only option is used, the catalog
query for retrieving the list of relations to process must read
pg_statistic and pg_statistic_ext_data. However, those catalogs
can only be read by superusers by default, so --missing-stats-only
is effectively superuser-only. This is unfortunate, but since the
option is primarily intended for use by administrators after
running pg_upgrade, let's just live with it for v18. This commit
adds a note about the aforementioned privilege requirements to the
documentation for --missing-stats-only.
We first tried to improve matters by modifying the query to read
the pg_stats and pg_stats_ext system views instead. While that is
indeed more lenient from a privilege standpoint, it is also
borderline incomprehensible. pg_stats shows rows for which the
user has the SELECT privilege on the corresponding column, and
pg_stats_ext shows rows for tables the user owns. Meanwhile,
ANALYZE requires either MAINTAIN on the table or, for non-shared
relations, ownership of the database. But even if the privilege
discrepancies were tolerable, the performance impact was not.
Ultimately, the modified query was substantially more expensive, so
we abandoned the idea.
For v19, perhaps we could introduce a simple, inexpensive way to
discover which relations are missing statistics, such as a system
function or view with similar privilege requirements to ANALYZE.
Unfortunately, it is far too late for anything like that in v18.
Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwHh43suEfss1wvBsk7vqiou%3DUY0zcy8HGyE5hBp%2BHZ7SQ%40mail.gmail.com
Backpatch-through: 18
Commit 4b754d6c1 introduced the concept of an excludeOnly scan key,
which cannot select matching index entries but can reject
non-matching tuples, for example a tsquery such as '!term'. There are
poorly-documented assumptions that such scan keys do not appear as the
first scan key. ginNewScanKey did nothing to ensure that, however,
with the result that certain GIN index searches could go into an
infinite loop while apparently-equivalent queries with the clauses in
a different order were fine.
Fix by teaching ginNewScanKey to place all excludeOnly scan keys
after all not-excludeOnly ones. So far as we know at present,
it might be sufficient to avoid the case where the very first
scan key is excludeOnly; but I'm not very convinced that there
aren't other dependencies on the ordering.
Bug: #19031
Reported-by: Tim Wood <washwithcare@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19031-0638148643d25548@postgresql.org
Backpatch-through: 13
The CHECK_FOR_INTERRUPTS call in gingetbitmap turns out to be
inadequate to prevent a long uninterruptible loop, because
we now know a case where looping occurs within scanGetItem.
While the next patch will fix the bug that caused that, it
seems foolish to assume that no similar patterns are possible.
Let's do the CFI within scanGetItem's retry loop, instead.
This demonstrably allows canceling out of the loop exhibited
in bug #19031.
Bug: #19031
Reported-by: Tim Wood <washwithcare@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19031-0638148643d25548@postgresql.org
Backpatch-through: 13
The Self-Join Elimination (SJE) feature messes up keeping and removing
RowMarks in remove_self_joins_one_group(). That didn't lead to a
user-level error, because the planned RowMark is only used to reference
an rtable entry in later execution stages, and the RTE entries for the
kept and removed relations are identical and refer to the same relation
OID.
To reduce confusion and prevent future issues, this commit cleans up the code
and fixes the incorrect behaviour. Furthermore, it includes sanity checks in
setrefs.c on existing non-null RTE and RelOptInfo entries for each RowMark.
Discussion: https://postgr.es/m/18c6bd6c-6d2a-419a-b0da-dfedef34b585%40gmail.com
Author: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Backpatch-through: 18
Rename inner and outer to rrel and krel, respectively, to highlight their
connection to r and k indexes. For the same reason, rename imark and omark
to rmark and kmark.
Discussion: https://postgr.es/m/18c6bd6c-6d2a-419a-b0da-dfedef34b585%40gmail.com
Author: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Backpatch-through: 18
This is a follow-up for commit c2c2c7e225. It further clarifies the
following in the initcap function documentation:
* Document that title case is used for digraphs in specific locales.
* Reference the particular ICU function used.
* Add a note about the purpose of the function.
Discussion: https://postgr.es/m/804cc10ef95d4d3b298e76b181fd9437%40postgrespro.ru
Author: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
This changes configure and meson.build to require at least C11,
instead of the previous C99. The installation documentation is
updated accordingly.
configure.ac previously used AC_PROG_CC_C99 to activate C99. But
there is no AC_PROG_CC_C11 in Autoconf 2.69, because it's too
old. (Also, post-2.69, the AC_PROG_CC_Cnn macros were deprecated and
AC_PROG_CC activates the last supported C mode.) We could update the
required Autoconf version, but that might be a separate project that
no one wants to undertake at the moment. Instead, we open-code the
test for C11 using some inspiration from later Autoconf versions. But
instead of writing an elaborate test program, we keep it simple and
just check __STDC_VERSION__, which should be good enough in practice.
In meson.build, we update the existing C99 test to C11, but again we
just check for __STDC_VERSION__.
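The probe amounts to a tiny test program along these lines (a sketch of
the idea, not the exact configure fragment):
/* Fails to compile unless the compiler is in C11 (or later) mode. */
#if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201112L
#error "C11 or later is required"
#endif

int
main(void)
{
    return 0;
}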
This also removes the separate option for the conforming preprocessor
on MSVC, added by commit 8fd9bb1d965, since that is activated
automatically in C11 mode.
Note, we don't use the "official" way to set the C standard in Meson
using the c_std project option, because that is impossible to use
correctly (see <https://github.com/mesonbuild/meson/issues/14717>).
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/01a69441-af54-4822-891b-ca28e05b215a@eisentraut.org
To better record the internal behaviors of oauth-curl.c, add a unit test
suite for the socket and timer handling code. This is all based on TAP
and driven by our existing Test::More infrastructure.
This commit is a replay of 1443b6c0e, which was reverted due to
buildfarm failures. Compared with that, this version protects the build
targets in the Makefile with a with_libcurl conditional, and it tweaks
the code style in 001_oauth.pl.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Discussion: https://postgr.es/m/CAOYmi+nDZxJHaWj9_jRSyf8uMToCADAmOfJEggsKW-kY7aUwHA@mail.gmail.com
Discussion: https://postgr.es/m/CAOYmi+m=xY0P_uAzAP_884uF-GhQ3wrineGwc9AEnb6fYxVqVQ@mail.gmail.com
libpq-oauth uses floor() but did not link against libm. Since libpq
itself uses -lm, nothing in the buildfarm has had problems with
libpq-oauth yet, and it seems difficult to hit a failure in practice.
But commit 1443b6c0e attempted to add an executable based on
libpq-oauth, which ran into link-time failures with Clang due to this
omission. It seems prudent to fix this for both the module and the
executable simultaneously so that no one trips over it in the future.
This is a Makefile-only change. The Meson side already pulls in libm,
through the os_deps dependency.
Discussion: https://postgr.es/m/CAOYmi%2Bn6ORcmV10k%2BdAs%2Bp0b9QJ4bfpk0WuHQaF5ODXxM8Y36A%40mail.gmail.com
Backpatch-through: 18
v17 introduced the MAINTAIN ON TABLES privilege. That changed the
applicable "baseacls" reaching buildACLCommands(). That yielded
spurious TestUpgradeXversion diffs. Change to use a TYPES privilege.
Types have the same one privilege in all supported versions, so they
avoid the problem. Per buildfarm. Back-patch to v13, like that commit.
Discussion: https://postgr.es/m/20250823144505.88.nmisch@google.com
Backpatch-through: 13
Commit 0decd5e89db9f5edb9b27351082f0d74aae7a9b6 missed DO_DEFAULT_ACL,
leading to assertion failures, potential dump order instability, and
spurious schema diffs. Back-patch to v13, like that commit.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/d32aaa8d-df7c-4f94-bcb3-4c85f02bea21@gmail.com
Backpatch-through: 13
This reverts commit bc22dc0e0ddc2dcb6043a732415019cc6b6bf683.
It appears that condition variables are not suitable for use inside
critical sections. If WaitLatch()/WaitEventSetWaitBlock() face postmaster
death, they exit, releasing all locks, instead of PANICing. In certain
situations, this leads to data corruption.
Reported-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/B3C69B86-7F82-4111-B97F-0005497BB745%40yandex-team.ru
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Aleksander Alekseev <aleksander@tigerdata.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Yura Sokolov <y.sokolov@postgrespro.ru>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 18
Statistics aren't created for virtual generated columns, so
"vacuumdb --missing-stats-only" always chooses to analyze tables
that have them. To fix, modify vacuumdb's query for retrieving
relations that are missing statistics to exclude those columns.
Oversight in commit edba754f05.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/20250820104226.8ba51e43164cd590b863ce41%40sraoss.co.jp
Backpatch-through: 18
The protocol documentation states that the maximum length of a cancel
key is 256 bytes. This starts checking for that limit in libpq.
Otherwise third party backend implementations will probably start
using more bytes anyway. We also start requiring that a protocol 3.0
connection does not send a longer cancel key, to make sure that
servers don't start breaking old 3.0-only clients by accident. Finally
this also restricts the minimum key length to 4 bytes (both in the
protocol spec and in the libpq implementation).
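A sketch of the resulting bounds check (constant and function names are
illustrative, not libpq's actual symbols):
#include <stdbool.h>

#define MIN_CANCEL_KEY_LENGTH 4
#define MAX_CANCEL_KEY_LENGTH 256

/*
 * Protocol 3.0 keys are fixed at 4 bytes; later protocol versions may
 * use any length within the documented bounds.
 */
static bool
cancel_key_length_ok(int len, bool is_protocol_3_0)
{
    if (is_protocol_3_0)
        return len == MIN_CANCEL_KEY_LENGTH;
    return len >= MIN_CANCEL_KEY_LENGTH && len <= MAX_CANCEL_KEY_LENGTH;
}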
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Jacob Champion <jchampion@postgresql.org>
Discussion: https://www.postgresql.org/message-id/df892f9f-5923-4046-9d6f-8c48d8980b50@iki.fi
Backpatch-through: 18
In most cases, if an out-of-memory situation happens, we attach the
error message to the connection and report it at the next
PQgetResult() call. However, there are a few cases, while processing
messages that are not associated with any particular query, where we
handled failed allocations differently and not very nicely:
- If we ran out of memory while processing an async notification,
getNotify() either returned EOF, which stopped processing any
further data until more data was received from the server, or
silently dropped the notification. Returning EOF is problematic
because if more data never arrives, e.g. because the connection was
used just to wait for the notification, or because the next
ReadyForQuery was already received and buffered, it would get stuck
forever. Silently dropping a notification is not nice either.
- (New in v18) If we ran out of memory while receiving BackendKeyData
message, getBackendKeyData() returned EOF, which has the same issues
as in getNotify().
- If we ran out of memory while saving a received ParameterStatus
message, we just skipped it. A later call to PQparameterStatus()
would return NULL, even though the server did send the status.
Change all those cases to terminate the connection instead. Our
options for reporting those errors are limited, but it seems better to
terminate than try to soldier on. Applications should handle
connection loss gracefully, whereas silently missing a notification,
parameter status, or cancellation key could cause much weirder
problems.
This also changes the error message on OOM while expanding the input
buffer. It used to report "cannot allocate memory for input buffer",
followed by "lost synchronization with server: got message type ...".
The "lost synchronization" message seems unnecessary, so remove that
and report only "cannot allocate memory for input buffer". (The
comment speculated that the out of memory could indeed be caused by
loss of sync, but that seems highly unlikely.)
This evolved from a more narrow patch by Jelte Fennema-Nio, which was
reviewed by Jacob Champion.
Somewhat arbitrarily, backpatch to v18 but no further. These are
long-standing issues, but we haven't received any complaints from the
field. We can backpatch more later, if needed.
Co-authored-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Jacob Champion <jchampion@postgresql.org>
Discussion: https://www.postgresql.org/message-id/df892f9f-5923-4046-9d6f-8c48d8980b50@iki.fi
Backpatch-through: 18
Commit 1585ff7387 changed GetTransactionSnapshot() to throw an error
if it's called during logical decoding, instead of returning the
historic snapshot. I made that change for extra protection, because a
historic snapshot can only be used to access catalog tables while
GetTransactionSnapshot() is usually called when you're executing
arbitrary queries. You might get very subtle visibility problems if
you tried to use the historic snapshot for arbitrary queries.
There's no built-in code in PostgreSQL that calls
GetTransactionSnapshot() during logical decoding, but it turns out
that the pglogical extension does just that, to evaluate row filter
expressions. You would get weird results if the row filter runs
arbitrary queries, but it is sane as long as you don't access any
non-catalog tables. Even though there are no checks to enforce that in
pglogical, a typical row filter expression does not access any tables
and works fine. Accessing tables marked with the user_catalog_table =
true option is also OK.
To fix pglogical with row filters, and any other extensions that might
do similar things, revert GetTransactionSnapshot() to return a
historic snapshot during logical decoding.
To try to still catch the unsafe usage of historic snapshots, add
checks in heap_beginscan() and index_beginscan() to complain if you
try to use a historic snapshot to scan a non-catalog table. We're very
close to the version 18 release however, so add those new checks only
in master.
Backpatch-through: 18
Reported-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://www.postgresql.org/message-id/20250809222338.cc.nmisch@google.com
Reduce from ShareLock to ShareUpdateExclusiveLock. Validation during
ALTER DOMAIN ... ADD CONSTRAINT keeps using ShareLock.
Example:
create domain d1 as int;
create table t (a d1);
alter domain d1 add constraint cc10 check (value > 10) not valid;
begin;
alter domain d1 validate constraint cc10;
-- another session
insert into t values (8);
Now we should still be able to perform DML operations on table t while
the domain constraint is being validated. The equivalent works
already on table constraints.
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CACJufxHz92A88NLRTA2msgE2dpXpE-EoZ2QO61od76-6bfqurA%40mail.gmail.com
This code was relying on "long", which is a signed 8-byte integer
everywhere except on Windows, where it is 4 bytes. That could
potentially expose it to overflows, even if the current uses in the
code are fine as far as I know. This code is now able to rely on a
type of the same size everywhere: int64. long was used for sizes,
partition counts and entry counts.
Some callers of the dynahash.c routines used long declarations that can
be cleaned up to use int64 instead. There was one shortcut based on
SIZEOF_LONG that can be removed. long is entirely removed from
dynahash.c and hsearch.h.
Similar work was done in b1e5c9fa9ac4.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Discussion: https://postgr.es/m/aKQYp-bKTRtRauZ6@paquier.xyz
Temporary relations may share the same RelFileNumber with a permanent
relation, or other temporary relations associated with other sessions.
Being able to uniquely identify a temporary relation would require
RelidByRelfilenumber() to know about the proc number of the temporary
relation it wants to identify, something it is not designed for since
its introduction in f01d1ae3a104.
There are currently three callers of RelidByRelfilenumber():
- autoprewarm.
- Logical decoding, reorder buffer.
- pg_filenode_relation(), which attempts to find a relation OID based on
a tablespace OID and a RelFileNumber.
This makes the situation problematic particularly for the first two
cases, leading to the possibility of random ERRORs due to
inconsistencies that temporary relations can create in the cache
maintained by RelidByRelfilenumber(). The third case should be less of
an issue, as I suspect that there are few direct callers of
pg_filenode_relation().
The window where the ERRORs happen is very narrow, requiring an OID
wraparound to create a lookup conflict in RelidByRelfilenumber() with a
temporary table reusing the same OID as another relation already cached.
The problem is easier to reach in workloads with a high OID consumption
rate, especially with a higher number of temporary relations created.
We could get pg_filenode_relation() and RelidByRelfilenumber() to work
with temporary relations if provided the means to identify them with an
optional proc number given in input, but the years have also shown that
we do not have a use case for it, yet. Note that this could not be
backpatched if pg_filenode_relation() needs changes. It is simpler to
ignore temporary relations.
Reported-by: Shenhao Wang <wangsh.fnst@fujitsu.com>
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Takamichi Osumi <osumi.takamichi@fujitsu.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/bbaaf9f9-ebb2-645f-54bb-34d6efc7ac42@fujitsu.com
Backpatch-through: 13
The result of pgaio_io_get_id() was being assigned to a mix of int and
uint32 variables. This fixes it to use int consistently, which seems
the most correct. Also change the queue empty special value in
method_worker.c to -1 from UINT32_MAX.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/70c784b3-f60b-4652-b8a6-75e5f051243e%40eisentraut.org
Replication slot synchronization (sync_replication_slots = on)
requires wal_level to be logical. This commit prevents the server
from starting if sync_replication_slots is enabled but wal_level
is set to minimal or replica.
Failing early during startup helps users catch invalid configurations
immediately, which is important because changing wal_level requires
a server restart.
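A hedged sketch of the startup-time validation (not the exact committed
code or message wording):
#include "postgres.h"
#include "access/xlog.h"
#include "replication/slotsync.h"

static void
validate_slot_sync_configuration(void)
{
    /* Fail startup early: changing wal_level needs a server restart. */
    if (sync_replication_slots && wal_level < WAL_LEVEL_LOGICAL)
        ereport(FATAL,
                errmsg("replication slot synchronization requires \"wal_level\" >= \"logical\""));
}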
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Discussion: https://postgr.es/m/CAH0PTU_pc3oHi__XESF9ZigCyzai1Mo3LsOdFyQA4aUDkm01RA@mail.gmail.com
This is similar to 19c6e92b13b2, in order to keep the style used in the
scripts consistent for the option names and values used in commands.
The places updated in this commit have been added recently in
71ea0d679543.
These changes are cosmetic; there is no need for a backpatch.
The description of this GUC provides a list of the situations where
full-page writes are generated. However, it is not completely exact,
mentioning only the cases of full_page_writes=on and base backups. It
is possible to generate full-page writes in more situations than these
two, making the description confusing as it implies that no other cases
exist.
The description is slightly reworded to take into account that other
cases are possible, without mentioning them directly to minimize the
maintenance burden should FPWs be generated in more contexts in the
future.
Author: Jingtang Zhang <mrdrivingduck@gmail.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Discussion: https://postgr.es/m/CAPsk3_CtAYa_fy4p6=x7qtoutrdKvg1kGk46D5fsE=sMt2546g@mail.gmail.com
Backpatch-through: 13
If we error out during execution of a SQL-language function, we will
often leave behind non-null pointers in its SQLFunctionCache's cplan
and eslist fields. This is problematic if the SQLFunctionCache is
re-used, because those pointers will point at resources that were
released during error cleanup. This problem escaped detection so far
because ordinarily we won't re-use an FmgrInfo+SQLFunctionCache struct
after a query error. However, in the rather improbable case that
someone implements an opclass support function in SQL language, there
will be long-lived FmgrInfos for it in the relcache, and then the
problem is reachable after the function throws an error.
To fix, add a flag to SQLFunctionCache that tracks whether execution
escapes out of fmgr_sql, and clear out the relevant fields during
init_sql_fcache if so. (This is going to need more thought if we ever
try to share FMgrInfos across threads; but it's very far from being
the only problem such a project will encounter, since many functions
regard fn_extra as being query-local state.)
This broke at commit 0313c5dc6; before that we did not try to re-use
SQLFunctionCache state across calls. Hence, back-patch to v18.
Bug: #19026
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/19026-90aed5e71d0c8af3@postgresql.org
Backpatch-through: 18
In refuseDupeIndexAttach(), change from
errdetail("Another index is already attached for partition \"%s\"."...)
to
errdetail("Another index \"%s\" is already attached for partition \"%s\"."...)
so we can easily understand which index is already attached for
partition \"%s\".
Author: Jian He <jian.universality@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/CACJufxGBfykJ_1ztk9T%2BL_gLmkOSOF%2BmL9Mn4ZPydz-rh%3DLccQ%40mail.gmail.com
Some replication slot manipulations (logical decoding via SQL,
advancing) were failing an assertion when releasing a slot in
single-user mode, because active_pid was not set in a ReplicationSlot
when its slot is acquired.
ReplicationSlotAcquire() has some logic to be able to work with the
single-user mode. This commit sets ReplicationSlot->active_pid to
MyProcPid, to let the slot-related logic fall through, considering the
single process as the one holding the slot.
Some TAP tests are added for various replication slot functions with the
single-user mode, while on it, for slot creation, drop, advancing, copy
and logical decoding with multiple slot types (temporary, physical vs
logical). These tests are skipped on Windows, as direct calls of
postgres --single would fail on permission failures. There is no
platform-specific behavior that needs to be checked, so living with this
restriction should be fine. The CI is OK with that; now let's see what
the buildfarm says.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Mutaamba Maasha <maasha@gmail.com>
Discussion: https://postgr.es/m/OSCPR01MB14966ED588A0328DAEBE8CB25F5FA2@OSCPR01MB14966.jpnprd01.prod.outlook.com
Backpatch-through: 13
vacuumdb should follow the behavior of the underlying VACUUM and ANALYZE
commands. When --analyze-only is used, it ought to analyze regular tables,
materialized views, and partitioned tables, just as ANALYZE (with no explicit
target tables) does. Otherwise, it should only process regular tables and
materialized views, since VACUUM skips partitioned tables when no targets
are given.
Previously, vacuumdb --analyze-only skipped partitioned tables. This was
inconsistent, and also inconvenient after pg_upgrade, where --analyze-only
is typically used to gather missing statistics.
This commit fixes the behavior so that vacuumdb --analyze-only also processes
partitioned tables. As a result, both vacuumdb --analyze-only and
ANALYZE (with no explicit targets) now analyze regular tables,
partitioned tables, and materialized views, but not foreign tables.
Because this is a nontrivial behavior change, it is applied only to master.
Reported-by: Zechman, Derek S <Derek.S.Zechman@snapon.com>
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Co-authored-by: Mircea Cadariu <cadariu.mircea@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CO1PR04MB8281387B9AD9DE30976966BBC045A%40CO1PR04MB8281.namprd04.prod.outlook.com
This comment mentions that pg_buffercache locks all buffer
partitions simultaneously, but it hasn't done so since v10.
Oversight in commit 6e654546fb.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/aKTuAHVEuYCUmmIy%40nathan
This commit adds CHECK_FOR_INTERRUPTS to loops iterating over shared
buffers in several pg_buffercache functions, allowing them to be
interrupted during long-running operations.
Backpatch to all supported versions. Add CHECK_FOR_INTERRUPTS to the
loop in pg_buffercache_pages() in all supported branches, and to
pg_buffercache_summary() and pg_buffercache_usage_counts() in version
16 and newer.
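The pattern, in sketch form:
#include "postgres.h"
#include "miscadmin.h"
#include "storage/bufmgr.h"

static void
scan_shared_buffers(void)
{
    for (int i = 0; i < NBuffers; i++)
    {
        /* Allow query cancel / backend shutdown during long scans. */
        CHECK_FOR_INTERRUPTS();
        /* ... examine buffer i, as pg_buffercache does ... */
    }
}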
Author: SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>
Discussion: https://postgr.es/m/CAHg+QDcejeLx7WunFT3DX6XKh1KshvGKa8F5au8xVhqVvvQPRw@mail.gmail.com
Backpatch-through: 13
This commit updates the pgoutput documentation with the following changes:
- Specify the data type for each pgoutput option.
- Clarify the relationship between proto_version and options such as
streaming and two_phase.
- Add a note on the use of pg_logical_slot_peek_changes and
pg_logical_slot_get_changes with pgoutput.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://postgr.es/m/CAHGQGwFJTbygdhhjR_iP4Oem=Lo1xsptWWOq825uoW+hG_Lfnw@mail.gmail.com
Previously, the documentation for pgoutput was located in the section on
the logical streaming replication protocol, and there was no index entry
for it. As a result, users had difficulty finding information about pgoutput.
This commit moves the pgoutput documentation under the logical decoding
section and adds an index entry, making it easier for users to locate and
access this information.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Chao Li <li.evan.chao@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Discussion: https://postgr.es/m/CAHGQGwFJTbygdhhjR_iP4Oem=Lo1xsptWWOq825uoW+hG_Lfnw@mail.gmail.com
This just includes a link to the bki documentation, to help people get
started.
Before commit 372728b0d49, there was a README at
src/backend/catalog/README, but then this was moved to the SGML
documentation. So this effectively puts back a link to what was
moved. But src/include/catalog/ is probably a better location,
because that's where all the interesting files are.
Co-authored-by: Florents Tselai <florents.tselai@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+v5N400GJFJ9RyXAX7hFKbtF7vVQGvWdFWEfcSQmvVhi9xfrA@mail.gmail.com
The DROP SUBSCRIPTION command performs several operations: it stops the
subscription workers, removes subscription-related entries from system
catalogs, and deletes the replication slot on the publisher server.
Previously, this command acquired an AccessExclusiveLock on
pg_subscription before initiating these steps.
However, while holding this lock, the command attempts to connect to the
publisher to remove the replication slot. In cases where the connection is
made to a newly created database on the same server as the subscriber, the
cache-building process during connection tries to acquire an
AccessShareLock on pg_subscription, resulting in a self-deadlock.
To resolve this issue, we reduce the lock level on pg_subscription during
DROP SUBSCRIPTION from AccessExclusiveLock to RowExclusiveLock. Earlier,
the higher lock level was used to prevent the launcher from starting a new
worker during the drop operation, as a restarted worker could become
orphaned.
Now, instead of relying on a strict lock, we acquire an AccessShareLock on
the specific subscription being dropped and re-validate its existence
after acquiring the lock. If the subscription is no longer valid, the
worker exits gracefully. This approach avoids the deadlock while still
ensuring that orphan workers are not created.
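In sketch form, the worker-side revalidation looks something like this
(illustrative, not the committed code):
#include "postgres.h"
#include "catalog/pg_subscription.h"
#include "storage/ipc.h"
#include "storage/lmgr.h"
#include "utils/syscache.h"

static void
ensure_subscription_exists(Oid subid)
{
    /* Lock just the one subscription being started, then re-check it. */
    LockSharedObject(SubscriptionRelationId, subid, 0, AccessShareLock);
    if (!SearchSysCacheExists1(SUBSCRIPTIONOID, ObjectIdGetDatum(subid)))
        proc_exit(0);       /* concurrently dropped; exit gracefully */
}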
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 13
Discussion: https://postgr.es/m/18988-7312c868be2d467f@postgresql.org
This provides a single entry point to access some information about the
state of MultiXacts, able to return some data about multixact offsets
and counts. Originally this function was only able to return some
information about the number of multixacts and multixact members; it is
extended here to provide some data about the oldest multixact ID in use
and the oldest offset, if known.
This change has been proposed in a patch that aims at providing more
monitoring capabilities for multixacts, and it is useful on its own.
GetMultiXactInfo() is added to multixact.h, becoming available for
out-of-core code.
Extracted from a larger patch by the same author.
Author: Naga Appani <nagnrik@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CA+QeY+AAsYK6WvBW4qYzHz4bahHycDAY_q5ECmHkEV_eB9ckzg@mail.gmail.com
This pointer was not used after its last update. This variable
assignment was most likely a vestigial artifact of earlier versions of
the patch set that led to 5891c7a8ed8f.
This pointer update is useless, so let's remove it. It removes one call
to pgstat_dsa_init_size(), making the code slightly easier to grasp.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aKLsu2sdpnyeuSSc@ip-10-97-1-34.eu-west-3.compute.internal
Now that the only call to relation_has_unique_index_for() that
supplied an exprlist and oprlist has been removed, the loop handling
those lists is effectively dead code. This patch removes that loop
and simplifies the function accordingly.
Author: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAMbWs4-EBnaRvEs7frTLbsXiweSTUXifsteF-d3rvv01FKO86w@mail.gmail.com
There are two implementation techniques for semijoins: one uses the
JOIN_SEMI jointype, where the executor emits at most one matching row
per left-hand side (LHS) row; the other unique-ifies the right-hand
side (RHS) and then performs a plain inner join.
The latter technique currently has some drawbacks related to the
unique-ification step.
* Only the cheapest-total path of the RHS is considered during
unique-ification. This may cause us to miss some optimization
opportunities; for example, a path with a better sort order might be
overlooked simply because it is not the cheapest in total cost. Such
a path could help avoid a sort at a higher level, potentially
resulting in a cheaper overall plan.
* We currently rely on heuristics to choose between hash-based and
sort-based unique-ification. A better approach would be to generate
paths for both methods and allow add_path() to decide which one is
preferable, consistent with how path selection is handled elsewhere in
the planner.
* In the sort-based implementation, we currently pay no attention to
the pathkeys of the input subpath or the resulting output. This can
result in redundant sort nodes being added to the final plan.
This patch improves semijoin planning by creating a new RelOptInfo for
the RHS rel to represent its unique-ified version. It then generates
multiple paths that represent elimination of distinct rows from the
RHS, considering both a hash-based implementation using the cheapest
total path of the original RHS rel, and sort-based implementations
that either exploit presorted input paths or explicitly sort the
cheapest total path. All resulting paths compete in add_path(), and
those deemed worthy of consideration are added to the new RelOptInfo.
Finally, the unique-ified rel is joined with the other side of the
semijoin using a plain inner join.
As a side effect, most of the code related to the JOIN_UNIQUE_OUTER
and JOIN_UNIQUE_INNER jointypes -- used to indicate that the LHS or
RHS path should be made unique -- has been removed. In addition, the
T_Unique path now has the same meaning for both semijoins and upper
DISTINCT clauses: it represents adjacent-duplicate removal on
presorted input. This patch unifies their handling by sharing the
same data structures and functions.
This patch also removes the UNIQUE_PATH_NOOP related code along the
way, as it is dead code -- if the RHS rel is provably unique, the
semijoin should have already been simplified to a plain inner join by
analyzejoins.c.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Alexandra Wang <alexandra.wang.oss@gmail.com>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Discussion: https://postgr.es/m/CAMbWs4-EBnaRvEs7frTLbsXiweSTUXifsteF-d3rvv01FKO86w@mail.gmail.com
This test was the only one named following the convention used for
alternate output files. This was a little bit confusing when looking at
the diffs of the test, because one would think that the diffs are based
on an uncommon case, as alternate outputs are usually used for uncommon
configuration scenarios.
create_sequence_1 was the only test in the tree using such a name, and
it had no alternate output.
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/aKLY6wCa_OInr3kY@paquier.xyz
Two header declarations were related to SQL-callable functions that
should have been cleaned up in df9133fa6384. On closer inspection, some
more includes can be removed, so let's clean these up as well while
we're at it.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/345438.1755524834@sss.pgh.pa.us
This existed in a semi-broken state from be0a66666 until 296cba276.
Recent discussion has questioned the value of having this at all, as it
only outputs static information about various of the hash table's
properties when the hash table is created.
Author: Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/OSCPR01MB1496650D03FA0293AB9C21416F534A@OSCPR01MB14966.jpnprd01.prod.outlook.com
Previously this was being output to stderr. This commit adjusts things
to use elog(DEBUG4). Here we also adjust the format of the message to
add the hash table name and also put the message on a single line. This
should make grepping the logs for this information easier.
Also get rid of the global hash table statistics. This seems very dated
and didn't fit very well with trying to put all the statistics for a
specific hash table on a single log line.
The main aim here is to make it possible to have at least one buildfarm
member building with HASH_STATISTICS, to help prevent future changes
from breaking things in that area. ca3891251 recently fixed some issues
here.
In passing, switch to using uint64 data types rather than longs for the
usage counters. The long type is 32 bits on some platforms we support.
Author: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAApHDvoccvJ9CG5zx+i-EyCzJbcL5K=CzqrnL_YN59qaL5hiaw@mail.gmail.com
Apparently the only Test::More function this script uses is
BAIL_OUT, so this omission just results in the wrong error
output appearing in the cases where it bails out.
Seems to have been an oversight in commit 9f899562d which
split Kerberos.pm out of another script.
Author: Maxim Orlov <orlovmg@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CACG=ezY1Dp-S94b78nN0ZuaBGGcMUB6_nF-VyYUwPt1ArFqmGA@mail.gmail.com
Backpatch-through: 17
Input with zero length can result in a buffer underflow when
accessing *(num + (len - 1)), as (len - 1) would produce a negative
index. Add an assertion for zero-length input to prevent it.
This was found by ALT Linux Team.
Reviewing the call sites shows that get_th() currently cannot be
applied to an empty string: it is always called on a string containing
a number we've just printed. Therefore, an assertion rather than a
user-facing error message is sufficient.
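A sketch of the guard (illustrative; the real function is get_th() in
formatting.c):
#include "postgres.h"

/* Return a pointer to the last digit of a numeric string. */
static const char *
last_digit(const char *num, int len)
{
    /* A zero-length input would read num[len - 1] == num[-1]. */
    Assert(len > 0);
    return num + (len - 1);
}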
Co-authored-by: Alexander Kuznetsov <kuznetsovam@altlinux.org>
Discussion: https://www.postgresql.org/message-id/flat/e22df993-cdb4-4d0a-b629-42211ebed582@altlinux.org
A patch is under discussion to add more SQL capabilities related to
multixacts, and this move avoids bloating the file more than necessary.
This affects pg_get_multixact_members(). A side effect of this move is
the requirement to add mxstatus_to_string() to multixact.h.
Extracted from a larger patch by the same author, tweaked by me.
Author: Naga Appani <nagnrik@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CA+QeY+AAsYK6WvBW4qYzHz4bahHycDAY_q5ECmHkEV_eB9ckzg@mail.gmail.com
Move the test for compiler options for C99 earlier in meson.build,
before we make use of the compiler for other tests. That way, if any
command-line options are needed, subsequent tests will also use them.
This is at the moment a theoretical problem, but it seems better to
get this correct. It also matches the order in the Autoconf-based
build more closely.
Discussion: https://www.postgresql.org/message-id/flat/01a69441-af54-4822-891b-ca28e05b215a@eisentraut.org
init_params() sets up "last_value" and "is_called" for a sequence
relation holding its metadata, based on the sequence properties in
pg_sequences. "log_cnt" is the third property that can be updated in
this routine for FormData_pg_sequence_data, tracking when WAL records
should be generated for a sequence after nextval() iterations. This
routine is called when creating or altering a sequence.
This commit refactors init_params() so that it no longer depends on
FormData_pg_sequence_data, removing traces of it from sequence.c and
making sequence metadata easier to manipulate. The knowledge
about "log_cnt" is replaced with a more general "reset_state" flag, to
let the caller know if the sequence state should be reset. In the case
of in-core sequences, this relates to WAL logging. We still need to
depend on FormData_pg_sequence.
Author: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/ZWlohtKAs0uVVpZ3@paquier.xyz
This test was failing because MD5 computations are not supported in
these environments. This switches the test to rely on sha256() instead,
providing the same coverage while avoiding the failure.
Oversight in f57e214d1cbb. Per buildfarm members gecko, molamola,
shikra and froghopper.
Discussion: https://postgr.es/m/aKJijS2ZRfRZiYb0@paquier.xyz
Commit c5b7ba4e6 changed things so that the ri_RootResultRelInfo field
of this struct is set for both partitions and inheritance children and
used for tuple routing and transition capture (before that commit, it
was only set for partitions to route tuples into), but failed to update
these comments.
Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/CAPmGK14NF5CcdCmTZpxrvpvBiT0y4EqKikW1r_wAu1CEHeOmUA%40mail.gmail.com
Backpatch-through: 14
This test exercises the corner case in toast_save_datum() where CLUSTER
operations encounter duplicated TOAST references, reusing the existing
TOAST data instead of creating redundant copies.
During table rewrites like CLUSTER, both live and recently-dead versions
of a row may reference the same TOAST value. When copying the second or
later version of such a row, the system checks if a TOAST value already
exists in the new TOAST table using toastrel_valueid_exists(). If
found, toast_save_datum() sets data_todo = 0 so that redundant data is not
stored, ensuring only one copy of the TOAST value exists in the new
table.
The test relies on a combination of UPDATE, CLUSTER, and checks of the
TOAST values used before and after the relation rewrite, to make sure
that the same values are reused across the rewrite.
This is a continuation of 69f75d671475 to make sure that this corner
case keeps working should we mess with this area of the code.
Author: Nikhil Kumar Veldanda <veldanda.nikhilkumar17@gmail.com>
Discussion: https://postgr.es/m/CAFAfj_E+kw5P713S8_jZyVgQAGVFfzFiTUJPrgo-TTtJJoazQw@mail.gmail.com
When extracting a timestamp from a UUIDv7, a conversion from
milliseconds to microseconds was using the incorrect constant
NS_PER_US instead of US_PER_MS. Although both constants have the same
value, this fix improves code clarity by using the semantically
correct constant.
Backpatch to v18, where UUIDv7 was introduced.
Author: Erik Nordström <erik@tigerdata.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/CACAa4V+i07eaP6h4MHNydZeX47kkLPwAg0sqe67R=M5tLdxNuQ@mail.gmail.com
Backpatch-through: 18
Recent changes to src/tools/ci/README triggered warnings like

    src/tools/ci/README:88: leftover conflict marker

Raise conflict-marker-size in .gitattributes to avoid these.
Add TAP tests that tests the LDAP Lookup of Connection Parameters
functionality in libpq. Prior to this commit, LDAP test coverage only
existed for the server-side authentication functionality and for
connection service file with parameters directly specified in the
file. The tests included here test a pg_service.conf that contains a
link to an LDAP system that contains all of the connection parameters.
Author: Andrew Jackson <andrewjackson947@gmail.com>
Discussion: https://www.postgresql.org/message-id/CAKK5BkHixcivSCA9pfd_eUp7wkLRhvQ6OtGLAYrWC%3Dk7E76LDQ%40mail.gmail.com
bms_prev_member() could attempt to access memory outside of the words[]
array in cases where the prevbit was a number < -1 or > a->nwords *
BITS_PER_BITMAPWORD + 1.
Here we add the Asserts to help draw attention to bogus callers so we're
more likely to catch them during development.
In passing, fix wording of bms_prev_member's header comment which talks
about how we expect the callers to ensure only valid prevbit values are
used.
Author: Greg Burd <greg@burd.me>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2000A717-1FFE-4031-827B-9330FB2E9065%40getmailspring.com
The SQL test added in this commit checks a specific code path that had no
coverage until now. When a TOAST datum is rewritten, toast_save_datum()
has a dedicated path to make sure that a new value is allocated if it
does not exist on the TOAST table yet.
This test uses a trick with PLAIN and EXTERNAL storage, with a tuple
large enough to be toasted and small enough to fit on a page. It is
initially stored in plain mode, and the rewrite forces the tuple to be
stored externally. The key point is that there is no value allocated
during the initial insert, and that there is one after the rewrite. A
second pattern checked is the reuse of the same value across rewrites,
using \gset.
A set of patches under discussion is messing with this area of the
code, so this makes sure that such rewrite cases remain consistent
across the board.
Author: Nikhil Kumar Veldanda <veldanda.nikhilkumar17@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAFAfj_E+kw5P713S8_jZyVgQAGVFfzFiTUJPrgo-TTtJJoazQw@mail.gmail.com
Handle 'ci-os-only' occurrences in the .cirrus.star file instead of
.cirrus.tasks.yml file. Now, 'ci-os-only' occurrences are controlled
from one central place instead of dealing with them in each task.
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/20240413021221.hg53rvqlvldqh57i%40awork3.anarazel.de
Backpatch: 15-, where CI support was added
We do not want to trigger some tasks by default, to avoid using too many
compute credits. These tasks have to be manually triggered to be run. But
e.g. for cfbot we do have sufficient resources, so we always want to start
those tasks.
With this commit, an individual repository can be configured to trigger
them automatically using an environment variable defined under
"Repository Settings", for example:
    REPO_CI_AUTOMATIC_TRIGGER_TASKS="mingw netbsd openbsd"
This will enable cfbot to turn them on by default when running tests for the
Commitfest app.
Backpatch this back to PG 15, even though PG 15 does not have any manually
triggered task. Keeping the CI infrastructure the same seems advantageous.
Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/20240413021221.hg53rvqlvldqh57i%40awork3.anarazel.de
Backpatch-through: 16
Doing that seems rather random and unnecessary. This commit removes
those and fixes fallout, which is pretty minimal. We do need to add a
forward declaration of struct TM_IndexDeleteOp (whose full definition
appears in tableam.h) so that _bt_delitems_delete_check()'s declaration
can use it.
Author: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/202508051109.lzk3lcuzsaxo@alvherre.pgsql
Make sure the memory allocated by make_absolute_path() is freed
when SelectConfigFiles() fails. Since all the callers will exit
immediately in that case, there's no practical gain here, but
silencing Valgrind leak complaints seems useful. In any case,
it was inconsistent that only one of the failure exits did this.
Author: Aleksander Alekseev <aleksander@tigerdata.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAJ7c6TMByXE8dc7zDvDWTQjk6o-XXAdRg_RAg5CBaUOgFPV3LQ%40mail.gmail.com
This function has an argument named "maxsize" that is used only in
assertions, being set once outside the assertion area. Recent gcc
versions with -Wunused-but-set-parameter emit a warning about it when
building without assertions enabled.
In order to fix this issue, PG_USED_FOR_ASSERTS_ONLY is added to the
function argument of SerializeClientConnectionInfo(), which is the first
time we are doing so in the tree. The CI is fine with the change, but
let's see what the buildfarm has to say on the matter.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Jacob Champion <jchampion@postgresql.org>
Discussion: https://postgr.es/m/pevajesswhxafjkivoq3yvwxga77tbncghlf3gq5fvchsvfuda@6uivg25sb3nx
Backpatch-through: 16
Commit 2633dae2e48 standardized LSN formatting but mistakenly changed
the logical snapshot filename format in SnapBuildSnapshotExists() from
"%X-%X.snap" to "%08X-%08X.snap". Other code still used the original
"%X-%X.snap" format, causing the replication slot synchronization worker
to fail to find existing snapshot files and produce excessive log messages.
This commit restores the original "%X-%X.snap" format
in SnapBuildSnapshotExists() to resolve the issue.
Author: Shveta Malik <shveta.malik@gmail.com>
Discussion: https://postgr.es/m/CAHGQGwHuHPB-ucAk_Tq3uSs4Fdziu1Jp_AA_RD3m5Ycky7m48w@mail.gmail.com
Remove conditionally-compiled code for the other case.
Replace uses of FLOAT8PASSBYVAL with constant "true", mainly because
it was quite confusing in cases where the type we were dealing with
wasn't float8.
I left the associated pg_control and Pg_magic_struct fields in place.
Perhaps we should get rid of them, but it would save little, so it
doesn't seem worth thinking hard about the compatibility implications.
I just labeled them "vestigial" in places where that seemed helpful.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/1749799.1752797397@sss.pgh.pa.us
Remove conditionally-compiled code for smaller Datum widths,
and simplify comments that describe cases no longer of interest.
I also fixed up a few more places that were not using
DatumGetIntXX where they should, and made some cosmetic
adjustments such as using sizeof(int64) not sizeof(Datum)
in places where that fit better with the surrounding code.
One thing I remembered while preparing this part is that SP-GiST
stores pass-by-value prefix keys as Datums, so that the on-disk
representation depends on sizeof(Datum). That's even more
unfortunate than the existing commentary makes it out to be,
because now there is a hazard that the change of sizeof(Datum)
will break SP-GiST indexes on 32-bit machines. It appears that
there are no existing SP-GiST opclasses that are actually
affected; and if there are some that I didn't find, the number
of installations that are using them on 32-bit machines is
doubtless tiny. So I'm proceeding on the assumption that we
can get away with this, but it's something to worry about.
(gininsert.c looks like it has a similar problem, but it's okay
because the "tuples" it's constructing are just transient data
within the tuplesort step. That's pretty poorly documented
though, so I added some comments.)
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/1749799.1752797397@sss.pgh.pa.us
This patch makes sizeof(Datum) be 8 on all platforms including
32-bit ones. The objective is to allow USE_FLOAT8_BYVAL to be true
everywhere, and in consequence to remove a lot of code that is
specific to pass-by-reference handling of float8, int8, etc. The
code for abbreviated sort keys can be simplified similarly. In this
way we can reduce the maintenance effort involved in supporting 32-bit
platforms, without going so far as to actually desupport them. Since
Datum is strictly an in-memory concept, this has no impact on on-disk
storage, though an initdb or pg_upgrade will be needed to fix affected
catalog entries.
We have required platforms to support [u]int64 for ages, so this
breaks no supported platform. We can expect that this change will
make 32-bit builds a bit slower and more memory-hungry, although being
able to use pass-by-value handling of 8-byte types may buy back some
of that. But we stopped optimizing for 32-bit cases a long time ago,
and this seems like just another step on that path.
This initial patch simply forces the correct type definition and
USE_FLOAT8_BYVAL setting, and cleans up a couple of minor compiler
complaints that ensued. This is sufficient for testing purposes.
In the wake of a bunch of Datum-conversion cleanups by Peter
Eisentraut, this now compiles cleanly with gcc on a 32-bit platform.
(I'd only tested the previous version with clang, which it turns out
is less picky than gcc about width-changing coercions.)
There is a good deal of now-dead code that I'll remove in separate
follow-up patches.
A catversion bump is required because this affects initial catalog
contents (on 32-bit machines) in two ways: pg_type.typbyval changes
for some built-in types, and Const nodes in stored views/rules will
now have 8 bytes not 4 for pass-by-value types.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/1749799.1752797397@sss.pgh.pa.us
Currently the pdb files for libpq and some other libraries are named the same
for the static and shared libraries. That has been the case for a long time,
but recently started failing, after an image update started using a newer
ninja version. The issue is not itself caused by ninja, but just made visible,
as the newer version optimizes the build order and builds the shared libpq
earlier than the static library. Previously both static and shared libraries
were built at the same time, which prevented msvc from detecting the issue.
When using /DEBUG:FASTLINK, pdb files cannot be updated, triggering the error.
We were using /DEBUG:FASTLINK due to running out of memory in the past, but
that was when using container based CI images, rather than full VMs.
This isn't really the correct fix (that'd be to deconflict the pdb file
names), but we'd like to get CI to become green again, and a proper fix (in
meson) will presumably take longer.
Suggested-by: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAN55FZ1RuBhJmPWs3Oi%3D9UoezDfrtO-VaU67db5%2B0_uy19uF%2BA%40mail.gmail.com
Backpatch-through: 16
Previously our tests did not exercise kill_prior_tuples for hash and gist.
For gist, some related paths were reached, but gist's implementation seems
not to work if all the dead tuples are on one page (or something like
that). The coverage for other index types was rather incidental.
Thus add an explicit test ensuring kill_prior_tuples works at all.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/lxzj26ga6ippdeunz6kuncectr5gfuugmm2ry22qu6hcx6oid6@lzx3sjsqhmt6
It turns out that on some platforms (at least current macOS, NetBSD,
OpenBSD) semget(2) will return EINVAL if there is a pre-existing
semaphore set with the same key and too few semaphores. Our code
expects EEXIST in that case and treats EINVAL as a hard failure,
resulting in failure during initdb or postmaster start.
POSIX does document EINVAL for too-few-semaphores-in-set, and is
silent on its priority relative to EEXIST, so this behavior arguably
conforms to spec. Nonetheless it's quite problematic because EINVAL
is also documented to mean that nsems is greater than the system's
limit on the number of semaphores per set (SEMMSL). If that is
where the problem lies, retrying would just become an infinite loop.
To resolve this contradiction, retry after EINVAL, but also install a
loop limit that will make us give up regardless of the specific errno
after trying 1000 different keys. (1000 is a pretty arbitrary number,
but it seems like it should be sufficient.) I like this better than
the previous infinite-looping behavior, since it will also keep us out
of trouble if (say) we get EACCES due to a system-level permissions
problem rather than anything to do with a specific semaphore set.
This problem has only been observed in the field in PG 17, which uses
a higher nsems value than other branches (cf. 38da05346, 810a8b1c8).
That makes it possible to get the failure if a new v17 postmaster
has a key collision with an existing postmaster of another branch.
In principle though, we might see such a collision against a semaphore
set created by some other application, in which case all branches are
vulnerable on these platforms. Hence, backpatch.
Reported-by: Gavin Panella <gavinpanella@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALL7chmzY3eXHA7zHnODUVGZLSvK3wYCSP0RmcDFHJY8f28Q3g@mail.gmail.com
Backpatch-through: 13
Set body indent to 0 to make use of the horizontal space better (and
some reviewers thought it was also more readable).
Add some left and right margin to the warning boxes, otherwise they
drift too far off the page in combination with the above change.
Author: Noboru Saito <noborusai@gmail.com>
Reviewed-by: Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com>
Reviewed-by: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Tatsuo Ishii <ishii@postgresql.org>
Discussion: https://www.postgresql.org/message-id/flat/CAAM3qnLyMUD79XF+SqAVwWCwURCF3hyuFY9Ki9Csbqs-zMwwnw@mail.gmail.com
The tests fixed in this commit were changing the sampling setting of a
foreign server, but then were analyzing a local table instead of a
foreign table, meaning that the test was not running for its original
purpose.
This commit changes the ANALYZE commands to analyze the foreign table,
and changes the foreign table definition to point to a valid remote
table. Attempting to analyze the foreign table "analyze_ftable" would
have failed before this commit, because "analyze_rtable1" is not defined
on the remote side.
Issue introduced by 8ad51b5f446b.
Author: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/CADkLM=cpUiJ3QF7aUthTvaVMmgQcm7QqZBRMDLhBRTR+gJX-Og@mail.gmail.com
Backpatch-through: 16
fb9f955025f optimized code generation by using specialized variants of
ExecSeqScan* for [not] having a qual, projection etc. This allowed the
compiler to optimize out the code for qual / projection. However, as
observed by David Rowley at the time, the compiler couldn't prove the
opposite, i.e. that the qual etc *are* present.
By using pg_assume(), introduced in d65eb5b1b84, we can tell the compiler that
the relevant variables are non-null.
This reduces the code size to a surprising degree and seems to lead to a small
but reproducible performance gain.
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqFk-MbwhfX_kucxzL8zLmjEt9MMcHi2YF=DyhPrSjsBEA@mail.gmail.com
Without using stamp files, meson lists the generated headers as the dependency
for every .c file, bloating build.ninja by more than 2x. Processing all the
dependencies also increases the time to generate build.ninja.
The immediate benefit is that this makes re-configuring and clean builds a bit
faster. The main motivation however is that I have other patches that
introduce additional build targets that further would increase the size of
build.ninja, making re-configuring more noticeably slower.
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/cgkdgvzdpinkacf4v33mky7tbmk467oda5dd4dlmucjjockxzi@xkqfvjoq4uiy
A malicious server could inject psql meta-commands into plain-text
dump output (i.e., scripts created with pg_dump --format=plain,
pg_dumpall, or pg_restore --file) that are run at restore time on
the machine running psql. To fix, introduce a new "restricted"
mode in psql that blocks all meta-commands (except for \unrestrict
to exit the mode), and teach pg_dump, pg_dumpall, and pg_restore to
use this mode in plain-text dumps.
While at it, encourage users to only restore dumps generated from
trusted servers or to inspect them beforehand, since restoring causes
the destination to execute arbitrary code of the source superusers'
choice. However, the client running the dump and restore needn't
trust the source or destination superusers.
Reported-by: Martin Rakhmanov
Reported-by: Matthieu Denais <litezeraw@gmail.com>
Reported-by: RyotaK <ryotak.mail@gmail.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Security: CVE-2025-8714
Backpatch-through: 13
Maliciously-crafted object names could achieve SQL injection during
restore. CVE-2012-0868 fixed this class of problem at the time, but
later work reintroduced three cases. Commit
bc8cd50fefd369b217f80078585c486505aafb62 (back-patched to v11+ in
2023-05 releases) introduced the pg_dump case. Commit
6cbdbd9e8d8f2986fde44f2431ed8d0c8fce7f5d (v12+) introduced the two
pg_dumpall cases. Move sanitize_line(), unchanged, to dumputils.c so
pg_dumpall has access to it in all supported versions. Back-patch to
v13 (all supported versions).
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Backpatch-through: 13
Security: CVE-2025-8715
Commit e2d4ef8de86 (the fix for CVE-2017-7484) added security checks
to the selectivity estimation functions to prevent them from running
user-supplied operators on data obtained from pg_statistic if the user
lacks privileges to select from the underlying table. In cases
involving inheritance/partitioning, those checks were originally
performed against the child RTE (which for plain inheritance might
actually refer to the parent table). Commit 553d2ec2710 then extended
that to also check the parent RTE, allowing access if the user had
permissions on either the parent or the child. It turns out, however,
that doing any checks using the child RTE is incorrect, since
securityQuals is set to NULL when creating an RTE for an inheritance
child (whether it refers to the parent table or the child table), and
therefore such checks do not correctly account for any RLS policies or
security barrier views. Therefore, do the security checks using only
the parent RTE. This is consistent with how RLS policies are applied,
and the executor's ACL checks, both of which use only the parent
table's permissions/policies. Similar checks are performed in the
extended stats code, so update that in the same way, centralizing all
the checks in a new function.
In addition, note that these checks by themselves are insufficient to
ensure that the user has access to the table's data because, in a
query that goes via a view, they only check that the view owner has
permissions on the underlying table, not that the current user has
permissions on the view itself. In the selectivity estimation
functions, there is no easy way to navigate from underlying tables to
views, so add permissions checks for all views mentioned in the query
to the planner startup code. If the user lacks permissions on a view,
a permissions error will now be reported at planner-startup, and the
selectivity estimation functions will not be run.
Checking view permissions at planner-startup in this way is a little
ugly, since the same checks will be repeated at executor-startup.
Longer-term, it might be better to move all the permissions checks
from the executor to the planner so that permissions errors can be
reported sooner, instead of creating a plan that won't ever be run.
However, such a change seems too far-reaching to be back-patched.
Back-patch to all supported versions. In v13, there is the added
complication that UPDATEs and DELETEs on inherited target tables are
planned using inheritance_planner(), which plans each inheritance
child table separately, so that the selectivity estimation functions
do not know that they are dealing with a child table accessed via its
parent. Handle that by checking access permissions on the top parent
table at planner-startup, in the same way as we do for views. Any
securityQuals on the top parent table are moved down to the child
tables by inheritance_planner(), so they continue to be checked by the
selectivity estimation functions.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Backpatch-through: 13
Security: CVE-2025-8713
The internal queue of buffers could become corrupted in a rare edge case
that failed to invalidate an entry, causing a stale buffer to be
"forwarded" to StartReadBuffers(). This is a simple fix for the
immediate problem.
A small API change might be able to remove this and related fragility
entirely, but that will have to wait a bit.
Defect in commit ed0b87ca.
Bug: 19006
Backpatch-through: 18
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Discussion: https://postgr.es/m/19006-80fcaaf69000377e%40postgresql.org
Fix a couple more places where an explicit Datum conversion
is needed (not clear how we missed these in ff89e182d and
previous commits).
Replace the minority usage "(Datum) NULL" with "(Datum) 0".
The former depends on the assumption that Datum is the same
width as Pointer, the latter doesn't. Anyway consistency
is a good thing.
This is, I believe, the last of the notational mop-up needed
before we can consider changing Datum to uint64 everywhere.
It's also important cleanup for more aggressive ideas such
as making Datum a struct.
Discussion: https://postgr.es/m/1749799.1752797397@sss.pgh.pa.us
Discussion: https://postgr.es/m/8246d7ff-f4b7-4363-913e-827dadfeb145@eisentraut.org
Add various missing conversions from and to Datum. The previous code
mostly relied on implicit conversions or its own explicit casts
instead of using the correct DatumGet*() or *GetDatum() functions.
We think these omissions are harmless. Some actual bugs that were
discovered during this process have been committed
separately (80c758a2e1d, fd2ab03fea2).
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/8246d7ff-f4b7-4363-913e-827dadfeb145%40eisentraut.org
Remove useless DatumGetFoo() and FooGetDatum() calls. These are
places where no conversion from or to Datum was actually happening.
We think these extra calls covered here were harmless. Some actual
bugs that were discovered during this process have been committed
separately (80c758a2e1d, 2242b26ce47).
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/8246d7ff-f4b7-4363-913e-827dadfeb145%40eisentraut.org
before checking ->has_scram_keys. MyProcPort is NULL in background
workers. So this could crash, for example, if a background worker
accessed a suitably configured foreign table.
Author: Alexander Pyhalov <a.pyhalov@postgrespro.ru>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/27b29a35-9b96-46a9-bc1a-914140869dac%40gmail.com
Commit 1443b6c0e introduced buildfarm breakage for Autoconf animals,
which expect to be able to run `make installcheck` on the libpq-oauth
directory even if libcurl support is disabled. Some other Meson animals
complained of a missing -lm link as well.
Since this is the day before a freeze, revert for now and come back
later.
Discussion: https://postgr.es/m/CAOYmi%2BnCkoh3zB%2BGkZad44%3DFNskwUg6F1kmuxqQZzng7Zgj5tw%40mail.gmail.com
To better record the internal behaviors of oauth-curl.c, add a unit test
suite for the socket and timer handling code. This is all based on TAP
and driven by our existing Test::More infrastructure.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/CAOYmi+nDZxJHaWj9_jRSyf8uMToCADAmOfJEggsKW-kY7aUwHA@mail.gmail.com
Tracking down the bugs that led to the addition of comb_multiplexer()
and drain_timer_events() was difficult, because an inefficient flow is
not visibly different from one that is working properly. To help
maintainers notice when something has gone wrong, track the number of
calls into the flow as part of debug mode, and print the total when the
flow finishes.
A new test makes sure the total count is less than 100. (We expect
something on the order of 10.) This isn't foolproof, but it is able to
catch several regressions in the logic of the prior two commits, and
future work to add TLS support to the oauth_validator test server should
strengthen it as well.
Backpatch-through: 18
Discussion: https://postgr.es/m/CAOYmi+nDZxJHaWj9_jRSyf8uMToCADAmOfJEggsKW-kY7aUwHA@mail.gmail.com
In a case similar to the previous commit, an expired timer can remain
permanently readable if Curl does not remove the timeout itself. Since
that removal isn't guaranteed to happen in real-world situations,
implement drain_timer_events() to reset the timer before calling into
drive_request().
Moving to drain_timer_events() happens to fix a logic bug in the
previous caller of timer_expired(), which treated an error condition as
if the timer were expired instead of bailing out.
The previous implementation of timer_expired() gave differing results
for epoll and kqueue if the timer was reset. (For epoll, a reset timer
was considered to be expired, and for kqueue it was not.) This didn't
previously cause problems, since timer_expired() was only called while
the timer was known to be set, but both implementations now use the
kqueue logic.
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CAOYmi+nDZxJHaWj9_jRSyf8uMToCADAmOfJEggsKW-kY7aUwHA@mail.gmail.com
If Curl needs to switch the direction of a socket's registration (e.g.
from CURL_POLL_IN to CURL_POLL_OUT), it expects the old registration to
be discarded. For epoll, this happened via EPOLL_CTL_MOD, but for
kqueue, the old registration would remain if it was not explicitly
removed by Curl.
Explicitly remove the opposite-direction event during registrations. (If
that event doesn't exist, we'll just get an ENOENT, which will be
ignored by the same code that handles CURL_POLL_REMOVE.) A few
assertions are also added to strengthen the relationship between the
number of events added, the number of events pulled off the queue, and
the lengths of the kevent arrays.
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CAOYmi+nDZxJHaWj9_jRSyf8uMToCADAmOfJEggsKW-kY7aUwHA@mail.gmail.com
If a socket is added to the kqueue, becomes readable/writable, and
subsequently becomes non-readable/writable again, the kqueue itself will
remain readable until either the socket registration is removed, or the
stale event is cleared via a call to kevent().
In many simple cases, Curl itself will remove the socket registration
quickly, but in real-world usage, this is not guaranteed to happen. The
kqueue can then remain stuck in a permanently readable state until the
request ends, which results in pointless wakeups for the client and
wasted CPU time.
Implement comb_multiplexer() to call kevent() and unstick any stale
events that would cause unnecessary callbacks. This is called right
after drive_request(), before we return control to the client to wait.
Suggested-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CAOYmi+nDZxJHaWj9_jRSyf8uMToCADAmOfJEggsKW-kY7aUwHA@mail.gmail.com
The code used

    return (Selectivity) 0.0;

where

    PG_RETURN_FLOAT8(0.0);

would be correct.
On 64-bit systems, these are pretty much equivalent, but on 32-bit
systems, PG_RETURN_FLOAT8() correctly produces a pointer, but the old
wrong code would return a null pointer, possibly leading to a crash
elsewhere.
We think this code is actually not reachable because bqarr_in won't
accept an empty query, and there is no other function that will
create query_int values. But better be safe and not let such
incorrect code lie around.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/8246d7ff-f4b7-4363-913e-827dadfeb145%40eisentraut.org
This function is called from ATExecAttachPartition/ATExecAddInherit,
which prevent tables with row-level triggers with transition tables from
becoming partitions or inheritance children, to check if there is such a
trigger on the given table, but failed to check if a found trigger is
row-level, causing the caller functions to needlessly prevent a table
with only a statement-level trigger with transition tables from becoming
a partition or inheritance child. Repair.
Oversight in commit 501ed02cf.
Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Discussion: https://postgr.es/m/CAPmGK167mXzwzzmJ_0YZ3EZrbwiCxtM1vogH_8drqsE6PtxRYw%40mail.gmail.com
Backpatch-through: 13
Previously, pg_dump --filter could misinterpret invalid object types
in the filter file as valid ones. For example, the invalid object type
"table-data" (likely a typo for the valid "table_data") could be
mistakenly recognized as "table", causing pg_dump to succeed
when it should have failed.
This happened because pg_dump identified keywords as sequences of
ASCII alphabetic characters, treating non-alphabetic characters
(like hyphens) as keyword boundaries. As a result, "table-data" was
parsed as "table".
To fix this, pg_dump --filter now treats keywords as strings of
non-whitespace characters, ensuring invalid types like "table-data"
are correctly rejected.
Back-patch to v17, where the --filter option was introduced.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Reviewed-by: Srinath Reddy <srinath2133@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAHGQGwFzPKUwiV5C-NLBqz1oK1+z9K8cgrF+LcxFem-p3_Ftug@mail.gmail.com
Backpatch-through: 17
Commit 9e6104c66 disallowed transition tables on foreign tables, but
failed to account for cases where a foreign table is a child table of a
partitioned/inherited table on which transition tables exist, leading to
incorrect transition tuples collected from such foreign tables for
queries on the parent table triggering transition capture. This
occurred not only for inherited UPDATE/DELETE but for partitioned INSERT
later supported by commit 3d956d956, which should have handled it at
least for the INSERT case, but didn't.
To fix, modify ExecAR*Triggers to throw an error if the given relation
is a foreign table requesting transition capture. Also, this commit
fixes make_modifytable so that in case of an inherited UPDATE/DELETE
triggering transition capture, FDWs choose normal operations to modify
child foreign tables, not DirectModify, which is needed because they
would otherwise skip the calls to ExecAR*Triggers at execution, causing
unexpected behavior.
Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CAPmGK14QJYikKzBDCe3jMbpGENnQ7popFmbEgm-XTNuk55oyHg%40mail.gmail.com
Backpatch-through: 13
Dropping twice a pgstats entry should not happen, and the error report
generated was missing the "generation" counter (tracking when an entry
is reused) that has been added in 818119afccd3.
Like d92573adcb02, backpatch down to v15 where this information is
useful to have, to gather more information from instances where the
problem shows up. A report has shown that this error path has been
reached on a standby based on 17.3, for a relation stats entry and an
OID close to wraparound.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/CAN4RuQvYth942J2+FcLmJKgdpq6fE5eqyFvb_PuskxF2eL=Wzg@mail.gmail.com
Backpatch-through: 15
libpq-oauth was missing from the installed_targets list, so

    $ ninja clean && ninja install-quiet

failed with the error message

    ERROR: File 'src/interfaces/libpq-oauth/libpq-oauth.a' could not be found
It seems a little odd to have to tell Meson what's missing, since it
clearly knows how to build that file during regular installation. But
the "quiet" variant we've created must use --no-rebuild, to avoid
spawning concurrent ninja processes that would step on each other.
Reported-by: Andres Freund <andres@anarazel.de>
Backpatch-through: 18
Discussion: https://postgr.es/m/hbpqdwxkfnqijaxzgdpvdtp57s7gwxa5d6sbxswovjrournlk6%404jnb2gzan4em
Although the "Floating-Point Types" section says that "float" data
type is taken to mean "double precision", this information was not
reflected in the data type table that lists all data type aliases.
Reported-by: alexander.kjall@hafslund.no
Author: Euler Taveira <euler@eulerto.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/175456294638.800.12038559679827947313@wrigleys.postgresql.org
Backpatch-through: 13
This adds a few more functions to int128.h, allowing more of numeric.c
to use 128-bit integers on all platforms.
Specifically, int64_div_fast_to_numeric() and the following aggregate
functions can now use 128-bit integers for improved performance on all
platforms, rather than just platforms with native support for int128:
- SUM(int8)
- AVG(int8)
- STDDEV_POP(int2 or int4)
- STDDEV_SAMP(int2 or int4)
- VAR_POP(int2 or int4)
- VAR_SAMP(int2 or int4)
In addition to improved performance on platforms lacking native
128-bit integer support, this significantly simplifies this numeric
code by allowing a lot of conditionally compiled code to be deleted.
A couple of numeric functions (div_var_int64() and sqrt_var()) still
contain conditionally compiled 128-bit integer code that only works on
platforms with native 128-bit integer support. Making those work more
generally would require rolling our own higher precision 128-bit
division, which isn't supported for now.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Discussion: https://postgr.es/m/CAEZATCWgBMc9ZwKMYqQpaQz2X6gaamYRB+RnMsUNcdMcL2Mj_w@mail.gmail.com
Small touch-up on commits 25505082f0e and 50fd428b2b9. Fix the
formatting of the example messages in the documentation and adjust the
wording to match the code.
Use Min(NBuffers, MAX_CHECKPOINT_REQUESTS) instead of NBuffers in
CheckpointerShmemSize() to match the actual array size limit set in
CheckpointerShmemInit(). This prevents wasting shared memory when
NBuffers > MAX_CHECKPOINT_REQUESTS. Also, fix the comment.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1439188.1754506714%40sss.pgh.pa.us
Author: Xuneng Zhou <xunengzhou@gmail.com>
Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com>
Recent ICU versions have added U_SHOW_CPLUSPLUS_HEADER_API, and we need
to set this to zero as well to hide the ICU C++ APIs from pg_locale.h
Per discussion, we want cpluspluscheck to work cleanly in backbranches,
so backpatch both this and its predecessor commit ed26c4e25a4 to all
supported versions.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1115793.1754414782%40sss.pgh.pa.us
Backpatch-through: 13
In the non-native code in int128_add_int64_mul_int64(), use signed
64-bit integer multiplication instead of unsigned multiplication for
the first three product terms. This simplifies the code needed to add
each product term to the result, leading to more compact and efficient
code. The actual performance gain is quite modest, but it seems worth
it to improve the code's readability.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Discussion: https://postgr.es/m/CAEZATCWgBMc9ZwKMYqQpaQz2X6gaamYRB+RnMsUNcdMcL2Mj_w@mail.gmail.com
On platforms without native 128-bit integer support, simplify the test
for carry in int128_add_uint64() by noting that the low-part addition
is unsigned integer arithmetic, which is just modular arithmetic.
Therefore the test for carry can simply be written as "new value < old
value" (i.e., a test for modular wrap-around). This can then be made
branchless so that on modern compilers it produces the same machine
instructions as native 128-bit addition, making it significantly
simpler and faster.
Similarly, the test for carry in int128_add_int64() can be written in
much the same way, but with an extra term to compensate for the sign
of the value being added. Again, on modern compilers this leads to
branchless code, often identical to the native 128-bit integer
addition machine code.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Discussion: https://postgr.es/m/CAEZATCWgBMc9ZwKMYqQpaQz2X6gaamYRB+RnMsUNcdMcL2Mj_w@mail.gmail.com
Commit d85ce012f99f has added some new error handling code to
date_trunc() of timestamp, timestamptz, and interval with infinite
values.
However, the new test cases added by that commit did not actually test
all of the new code, missing coverage for the following cases:
1) For timestamp without time zone:
1-1) infinite value with valid unit
1-2) infinite value with unsupported unit
1-3) finite value with unsupported unit, for a code path older than
d85ce012f99f.
2) For timestamp with time zone, without a time zone specified for the
truncation:
2-1) infinite value with valid unit
2-2) infinite value with unsupported unit
2-3) finite value with unsupported unit, for a code path older than
d85ce012f99f.
3) For timestamp with time zone, with a time zone specified for the
truncation:
3-1) infinite value with valid unit.
3-2) infinite value with unsupported unit.
This commit also provides coverage for the bug fixed in 2242b26ce472,
through cases 2-1) and 3-1), when using an infinite value with a valid
unit, with[out] the optional time zone parameter used for the
truncation.
Author: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/2d320b6f-b4af-4fbc-9eec-5d0fa15d187b@eisentraut.org
Discussion: https://postgr.es/m/4bf60a84-2862-4a53-acd5-8eddf134a60e@eisentraut.org
Backpatch-through: 18
The code used a PG_RETURN_TIMESTAMPTZ() where the return type is
TimestampTz and not a Datum.
On 64-bit systems, there is no effect since this just ends up casting
64-bit integers back and forth. On 32-bit systems, timestamptz is
pass-by-reference. PG_RETURN_TIMESTAMPTZ() allocates new memory and
returns the address, meaning that the caller could end up interpreting
the address as the timestamp value.
The effect is using "date_trunc(..., 'infinity'::timestamptz) will
return random values (instead of the correct return value 'infinity').
Bug introduced in commit d85ce012f99f.
Author: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2d320b6f-b4af-4fbc-9eec-5d0fa15d187b@eisentraut.org
Discussion: https://postgr.es/m/4bf60a84-2862-4a53-acd5-8eddf134a60e@eisentraut.org
Backpatch-through: 18
This commit makes use of the existing PqMsg_* macros in more places
and adds new PqReplMsg_* and PqBackupMsg_* macros for use in
special replication and backup messages, respectively.
Author: Dave Cramer <davecramer@gmail.com>
Co-authored-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Discussion: https://postgr.es/m/aIECfYfevCUpenBT@nathan
Discussion: https://postgr.es/m/CAFcNs%2Br73NOUb7%2BqKrV4HHEki02CS96Z%2Bx19WaFgE087BWwEng%40mail.gmail.com
The name "namspace" looks like a typo, but it was presumably meant
to avoid using the "namespace" C++ keyword. This commit renames
the parameter to "nameSpace" to prevent future confusion while
still avoiding the keyword.
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/aJJxpfsDfiQ1VbJ5%40nathan
This rearranges the code in src/include/common/int128.h, so that the
native and non-native implementations of each function are together
inside the function body (as they are in src/include/common/int.h),
rather than being in separate parts of the file.
This improves readability and maintainability, making it easier to
compare the native and non-native implementations, and avoiding the
need to duplicate every function comment and declaration.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Discussion: https://postgr.es/m/CAEZATCWgBMc9ZwKMYqQpaQz2X6gaamYRB+RnMsUNcdMcL2Mj_w@mail.gmail.com
These were introduced (commit efdc7d74753) at the same time as we were
moving to using the standard inttypes.h format macros (commit
a0ed19e0a9e). It doesn't seem useful to keep a new already-deprecated
interface like this with only a few users, so remove the new symbols
again and have the callers use PRIx64.
(Also, INT64_HEX_FORMAT was kind of a misnomer, since hex formats all
use unsigned types.)
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/0ac47b5d-e5ab-4cac-98a7-bdee0e2831e4%40eisentraut.org
This creates a new test module src/test/modules/test_int128 and moves
src/tools/testint128.c into it so that it can be built using the
normal build system, allowing the 128-bit integer arithmetic functions
in src/include/common/int128.h to be tested automatically. For now,
the tests are skipped on platforms that don't have native int128
support.
While at it, fix the test128 union in the test code: the "hl" member
of test128 was incorrectly defined to be a union instead of a struct,
which meant that the tests were only ever setting and checking half of
each 128-bit integer value.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: John Naylor <johncnaylorls@gmail.com>
Discussion: https://postgr.es/m/CAEZATCWgBMc9ZwKMYqQpaQz2X6gaamYRB+RnMsUNcdMcL2Mj_w@mail.gmail.com
toast_save_datum() has long had code to handle short varlenas (values
up to 126 bytes reduced to a 1-byte header), converting such varlenas
to an external on-disk TOAST pointer with the value saved uncompressed
in the secondary TOAST relation.
There was zero coverage for this code path. This commit adds a test
able to exercise it, relying on two external attributes, one with a low
toast_tuple_target, so that it is possible to trigger the threshold for
the insertion of short varlenas into the TOAST relation.
Author: Nikhil Kumar Veldanda <veldanda.nikhilkumar17@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aJAl7-NvIk0kZByz@paquier.xyz
ALTER TABLE ... SET EXPRESSION AS removes statistics for the target column,
so running ANALYZE afterward is recommended. But this was previously not
documented, even though a similar recommendation exists for
ALTER TABLE ... SET DATA TYPE, which also clears the column's statistics.
This commit updates the documentation to include the ANALYZE recommendation
for SET EXPRESSION AS.
Since v18, virtual generated columns are supported, and these columns never
have statistics. Therefore, ANALYZE is not needed after SET DATA TYPE or
SET EXPRESSION AS when used on virtual generated columns. This commit also
updates the documentation to clarify that ANALYZE is unnecessary in such cases.
Back-patch the ANALYZE recommendation for SET EXPRESSION AS to v17
where the feature was introduced, and the note about virtual generated
columns to v18 where those columns were added.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/20250804151418.0cf365bd2855d606763443fe@sraoss.co.jp
Backpatch-through: 17
Following commit e035863c9a0, building with -O0 began triggering
warnings about potentially uninitialized 'workbuf' usage. While
theoretically the initialization isn't necessary since VARDATA()
doesn't access the contents of the pointed-to object, this commit
explicitly initializes the workbuf variable to suppress the warning.
Buildfarm members adder and flaviventris have shown the warning.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAD21AoCOZxfqnNgfM5yVKJZYnOq5m2Q96fBGy1fovEqQ9V4OZA@mail.gmail.com
The result of "DirectFunctionCall1(numeric_float8, d)" is already in
Datum form, but the code was incorrectly applying PG_RETURN_FLOAT8()
to it. On machines where float8 is pass-by-reference, this would
result in complete garbage, since an unpredictable pointer value
would be treated as an integer and then converted to float. It's not
entirely clear how much of a problem would ensue on 64-bit hardware,
but certainly interpreting a float8 bitpattern as uint64 and then
converting that to float isn't the intended behavior.
As luck would have it, even the complete-garbage case doesn't break
BRIN indexes, since the results are only used to make choices about
how to merge values into ranges: at worst, we'd make poor choices
resulting in an inefficient index. Doubtless that explains the lack
of field complaints. However, users with BRIN indexes that use the
numeric_minmax_multi_ops opclass may wish to reindex in hopes of
making their indexes more efficient.
Author: Peter Eisentraut <peter@eisentraut.org>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2093712.1753983215@sss.pgh.pa.us
Backpatch-through: 14
This commit introduces a new column backup_type that indicates the
type of backup being performed: either 'full' or 'incremental'.
Bump catalog version.
Author: Shinya Kato <shinya11.kato@gmail.com>
Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/CAOzEurQuzbHwTj1ehk1a+eeQDidJPyrE5s6mYumkjwjZnurhkQ@mail.gmail.com
During CREATE DATABASE, if changing the locale provider, require that
a new locale is specified rather than trying to reinterpret the
template's locale using the new provider.
This only affects the behavior when the template uses the builtin
provider and CREATE DATABASE specifies the ICU provider without
specifying the locale. Previously, that may have succeeded due to
loose validation by ICU, whereas now that will cause an error. Because
it can cause an error, backport only to unreleased versions.
Discussion: https://postgr.es/m/5038b33a6dc639009f4b3d43fa6ae0c5ba9e04f7.camel@j-davis.com
Backpatch-through: 18
We've only bothered converting the external interfaces, not the
endian-dependent internal macros (which should not be used by any
callers other than the interface functions in this header, anyway).
The VARTAG_1B_E() changes are required for C++ compatibility.
Author: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/928ea48f-77c6-417b-897c-621ef16685a6@eisentraut.org
Macros like VARDATA() and VARSIZE() should be thought of as taking
values of type pointer to struct varlena or some other related struct.
The way they are implemented, you can pass anything to it and it will
cast it right. But this is in principle incorrect. To fix, add the
required DatumGetPointer() calls. Or in a couple of cases, remove
superfluous PointerGetDatum() calls.
It is planned in a subsequent patch to change macros like VARDATA()
and VARSIZE() to inline functions, which will enforce stricter typing.
This is in preparation for that.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/928ea48f-77c6-417b-897c-621ef16685a6%40eisentraut.org
Previously, specifying the publication option 'publish_generated_columns'
without an explicit value would incorrectly default to 'stored', which is
not the intended behavior.
This patch fixes the issue by raising an ERROR when no value is provided
for 'publish_generated_columns', ensuring that users must explicitly
specify a valid option.
Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Backpatch-through: 18, where it was introduced
Discussion: https://postgr.es/m/CAHut+PsCUCWiEKmB10DxhoPfXbF6jw5RD9ib2LuaQeA_XraW7w@mail.gmail.com
Import usleep, which, due to an oversight in commit 48796a98d5ae, was
used but not imported.
Correct the comparison string used in two logfile checks. Previously, it
was incorrect and thus the test could never have failed.
Also wordsmith a comment to make it clear when hot_standby_feedback is
meant to be on during the test scenarios.
Reported-by: Melanie Plageman <melanieplageman@gmail.com>
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/flat/CAAKRu_YO2mEm%3DZWZKPjTMU%3DgW5Y83_KMi_1cr51JwavH0ctd7w%40mail.gmail.com
Backpatch-through: 16
func.sgml has grown over the years to the point where it is very
difficult to manage. This commit splits out each sect1 piece into its
own file, which is then included in the main file, so that the built
documentation should be identical to the pre-split documentation. All
these new files are placed in a new "func" subdirectory, and the
previous func.sgml is removed.
Done using scripts developed by:
Author: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxFgAh1--EMwOjMuANe=VTmjkNaZjH+AzSe04-8ZCGiESA@mail.gmail.com
When prep_buildtree is used to prepare a build tree when the source
directory already contains another build tree, then it will produce
the directory structure of the first build tree in the second one.
For example, if there is

    postgresql/
    postgresql/build1/

and a new build tree postgresql/build2/ is prepared, then this will
produce

    postgresql/build2/build1/
because it just copies all subdirectories of the source tree. This is
not harmful, but it's pretty stupid and can be confusing, and it slows
down prep_buildtree when there are many build trees.
When prep_buildtree was first created, it was more common for the
build tree to be outside the source tree, in which case this is not a
problem. But now with the arrival of meson, it appears to be more
common (and also the way it is documented in the PostgreSQL
documentation) to have the build tree inside the source tree. (To be
clear: This change does not affect meson at all. But it would be an
issue for example if you have a meson build tree and a configure build
tree under the same source tree.)
To fix this, change the "find" command to process only those top-level
directories that we know about (namely config, contrib, doc, src). (I
alternatively looked for ways to ignore directories that looked like
build directories, but that seemed extremely complicated.) With that,
we can also remove the code that ignores directories related to
source-control management.
In passing, also remove the workaround for handling prebuilt docs,
since that has been obsolete since commit 54fac0e5050.
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/8b96b07f-1f48-46e9-b26e-01b2c9e4ac8d%40eisentraut.org
This name is only used as documentation, and using this name is
consistent with its byte being a 'w'. Renaming it would also make the
use of a symbolic name based on the word "WAL" rather than the obsolete
"XLog" term more consistent, per future commits along the lines of
37c7a7eeb6d1, 4a68d5008869, f4b54e1ed985.
Discussion: https://postgr.es/m/aIECfYfevCUpenBT@nathan
Previously, enabling sync_replication_slots while wal_level was not set
to logical could cause the server to shut down. This was because
the postmaster performed a configuration check before launching
the slot synchronization worker and raised an ERROR if the settings
were incompatible. Since ERROR is treated as FATAL in the postmaster,
this resulted in the entire server shutting down unexpectedly.
This commit changes the postmaster to log that message at LOG level
instead of raising an ERROR, allowing the server to continue running
even with the misconfiguration.
Back-patch to v17, where slot synchronization was introduced.
Reported-by: Hugo DUBOIS <hdubois@scaleway.com>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Hugo DUBOIS <hdubois@scaleway.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Discussion: https://postgr.es/m/CAH0PTU_pc3oHi__XESF9ZigCyzai1Mo3LsOdFyQA4aUDkm01RA@mail.gmail.com
Backpatch-through: 17
It's possible to use a CHECK (col IS NOT NULL) constraint to skip
scanning a table for nulls when adding a NOT NULL constraint on the same
column. However, if the CHECK constraint is dropped in the same command
that adds the NOT NULL, the optimization fails to apply, making the NOT
NULL addition slow. The best we can do about it at this stage is to
document this so that users aren't taken by surprise.
(In Postgres 18 you can directly add the NOT NULL constraint as NOT
VALID instead, so there's no longer much use for the CHECK constraint,
therefore no point in building mechanism to support the case better.)
Reported-by: Andrew <psy2000usa@yahoo.com>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/175385113607.786.16774570234342968908@wrigleys.postgresql.org
This enhancement builds upon the infrastructure introduced in commit
228c370868, which enables the preservation of deleted tuples and their
origin information on the subscriber. This capability is crucial for
handling concurrent transactions replicated from remote nodes.
The update introduces support for detecting update_deleted conflicts
during the application of update operations on the subscriber. When an
update operation fails to locate the target row (typically because it
has been concurrently deleted), we perform an additional table scan.
This scan uses the SnapshotAny mechanism, and we perform it only when
the retain_dead_tuples option is enabled for the relevant subscription.
The goal of this scan is to locate the most recently deleted tuple,
matching the old column values from the remote update, that has not yet
been removed by VACUUM and is still visible according to our slot
(i.e., its deletion is not older than the conflict detection slot's
xmin). If such a tuple is found, the system reports an update_deleted
conflict, including the origin and transaction details responsible for
the deletion.
This provides groundwork for a more robust and accurate conflict
resolution process, preventing unexpected behavior by correctly
identifying cases where a remote update clashes with a deletion from
another origin.
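A rough sketch of the extra scan; the helpers for matching and
bookkeeping are invented here for illustration, and the committed code
differs in detail:

    TableScanDesc scan = table_beginscan(rel, SnapshotAny, 0, NULL);
    TupleTableSlot *scanslot = table_slot_create(rel, NULL);

    while (table_scan_getnextslot(scan, ForwardScanDirection, scanslot))
    {
        TransactionId xmax;

        /* hypothetical helper: compare against remote old values */
        if (!tuples_match_remote_old_values(scanslot, remoteslot))
            continue;

        /* hypothetical helper: xid that deleted this tuple, if any */
        xmax = tuple_get_deleting_xid(scanslot);

        /* consider only deletions still visible to our slot */
        if (TransactionIdIsValid(xmax) &&
            !TransactionIdPrecedes(xmax, conflict_slot_xmin))
            remember_most_recently_deleted(scanslot, xmax);
    }
    table_endscan(scan);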
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com
Coverity complained that the "errtrace" string is leaked if we return
early because backtrace_symbols fails. Another criticism that could
be leveled at this is that not providing any hint of what happened is
user-unfriendly. Fix that.
The odds of a leak here are small, and typically it wouldn't matter
anyway since the leak will be in ErrorContext which will soon get
reset. So I'm not feeling a need to back-patch.
If ndatums is zero, the code would allocate zero-length boundKinds
and boundDatums chunks, which would have nothing pointing to them,
leading to Valgrind complaints. Rearrange the code to avoid the
useless pallocs, and also to not bother computing byval/typlen when
they aren't used.
I'm unsure why I didn't see this in my Valgrind testing back in May.
This code hasn't changed since then, but maybe we added a regression
test that reaches this edge case. Or possibly I just failed to
notice the reports, which do say "0 bytes lost".
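The rearrangement amounts to something like this (a sketch, not the
exact committed code; the element-type variable is invented):

    if (ndatums > 0)
    {
        /* allocate only when there is something to store */
        boundKinds = (PartitionRangeDatumKind *)
            palloc(ndatums * sizeof(PartitionRangeDatumKind));
        boundDatums = (Datum *) palloc(ndatums * sizeof(Datum));

        /* compute byval/typlen here, the only place they are used */
        get_typlenbyval(elemtype, &typlen, &byval);
    }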
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
CompleteCachedPlan intentionally doesn't worry about small
leaks from PlanCacheComputeResultDesc. However, Valgrind
knows nothing of engineering tradeoffs and complains anyway.
Silence it by doing things the hard way if USE_VALGRIND.
I don't really love this patch, because it makes the handling
of plansource->resultDesc different from the handling of query
dependencies and search_path just above, which likewise are willing
to accept small leaks into the cached plan's context. However,
those cases aren't provoking Valgrind complaints. (Perhaps in a
CLOBBER_CACHE_ALWAYS build, they would?) For the moment, this
makes the src/pl/plpgsql tests leak-free according to Valgrind.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
Like the situation with function cache loading, text search
dictionary loading functions tend to leak some cruft into the
dictionary's long-lived cache context. To judge by the examples in
the core regression tests, not very many bytes are at stake.
Moreover, I don't see a way to prevent such leaks without changing the
API for TS template initialization functions: right now they do not
have to worry about making sure that their results are long-lived.
Hence, I think we should install a suppression rule rather than trying
to fix this completely. However, I did grab some low-hanging fruit:
several places were leaking the result of get_tsearch_config_filename.
Fixing those seems worth doing mostly because they were inconsistent
with other dictionaries that were freeing the result already.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
PL/pgSQL and SQL-function parsing leak some stuff into the long-lived
function cache context. This isn't really a huge practical problem,
since it's not a large amount of data and the cruft will be recovered
if we have to re-parse the function. It's not clear that it's worth
working any harder than the previous patch did to eliminate these
leak complaints, so instead silence them with a suppression rule.
This suppression rule also hides the fact that CachedFunction structs
are intentionally leaked in some cases because we're unsure if any
fn_extra pointers remain. That might be nice to do something about
eventually, but it's not clear how.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
format_procedure leaks memory, so run it in a short-lived context
not the session-lifespan cache context for the PL/pgSQL function.
parse_datatype called the core parser in the function's cache context,
thus leaking potentially a lot of storage into that context. We were
also being a bit careless with the TypeName structures made in that
code path and others. Most of the time we don't need to retain the
TypeName, so make sure it is made in the short-lived temp context,
and copy it only if we do need to retain it.
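The pattern is the usual short-lived-context dance, roughly (a sketch;
the parsing call shown is invented):

    MemoryContext oldcxt;

    /* run the leak-prone calls in the short-lived context ... */
    oldcxt = MemoryContextSwitchTo(plpgsql_compile_tmp_cxt);
    typeName = parse_datatype_to_typename(string);  /* hypothetical */
    MemoryContextSwitchTo(oldcxt);

    /* ... and copy only if the result must outlive it */
    if (need_to_retain)
        typeName = copyObject(typeName);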
These are far from the only leaks in PL/pgSQL compilation, but
they're the biggest as far as I've seen, and further improvement
looks like it'd require delicate and bug-prone surgery.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
These changes don't actually fix any leaks. They just make sure that
Valgrind will find pointers to data structures that remain allocated
at process exit, and thus not falsely complain about leaks. In
particular, we are trying to avoid situations where there is no
pointer to the beginning of an allocated block (except possibly
within the block itself, which Valgrind won't count).
* Because dynahash.c never frees hashtable storage except by deleting
the whole hashtable context, it doesn't bother to track the individual
blocks of elements allocated by element_alloc(). This results in
"possibly lost" complaints from Valgrind except when the first element
of each block is actively in use. (Otherwise it'll be on a freelist,
but very likely only reachable via "interior pointers" within element
blocks, which doesn't satisfy Valgrind.)
To fix, if we're building with USE_VALGRIND, expend an extra pointer's
worth of space in each element block so that we can chain them all
together from the HTAB header (see the sketch after this list). Skip
this in shared hashtables though: Valgrind doesn't track those, and
we'd need additional locking to make it safe to manipulate a shared
chain.
While here, update a comment obsoleted by 9c911ec06.
* Put the dlist_node fields of catctup and catclist structs first.
This ensures that the dlist pointers point to the starts of these
palloc blocks, and thus that Valgrind won't consider them
"possibly lost".
* The postmaster's PMChild structs and the autovac launcher's
avl_dbase structs also have the dlist_node-is-not-first problem,
but putting it first still wouldn't silence the warning because we
bulk-allocate those structs in an array, so that Valgrind sees a
single allocation. Commonly the first array element will be pointed
to only from some later element, so that the reference would be an
interior pointer even if it pointed to the array start. (This is the
same issue as for dynahash elements.) Since these are pretty simple
data structures, I don't feel too bad about faking out Valgrind by
just keeping a static pointer to the array start.
(This is all quite hacky, and it's not hard to imagine usages where
we'd need some other idea in order to have reasonable leak tracking of
structures that are only accessible via dlist_node lists. But these
changes seem to be enough to silence this class of leakage complaints
for the moment.)
* Free a couple of data structures manually near the end of an
autovacuum worker's run when USE_VALGRIND, and ensure that the final
vac_update_datfrozenxid() call is done in a non-permanent context.
This doesn't have any real effect on the process's total memory
consumption, since we're going to exit as soon as that last
transaction is done. But it does pacify Valgrind.
* Valgrind complains about the postmaster's socket-files and
lock-files lists being leaked, which we can silence by just
not nulling out the static pointers to them.
* Valgrind seems not to consider the global "environ" variable as
a valid root pointer; so when we allocate a new environment array,
it claims that data is leaked. To fix that, keep our own
statically-allocated copy of the pointer, similarly to the previous
item.
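The chaining mentioned in the first item looks roughly like this (the
block-header type and list field are invented for the sketch):

    #ifdef USE_VALGRIND
        /*
         * Chain the new element block from the HTAB header so that
         * Valgrind can always reach it; skipped for shared tables.
         */
        if (!hashp->isshared)
        {
            ((ElementBlockHdr *) block)->next = hashp->element_blocks;
            hashp->element_blocks = block;
        }
    #endif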
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
In the current system architecture, none of these are worth obsessing
over; most are once-per-process leaks. However, Valgrind complains
about all of them, and if we get to using threads rather than
processes for backend sessions, it will become more interesting to
avoid per-session leaks.
* Fix leaks in StartupXLOG() and ShutdownWalRecovery().
* Fix leakage of pq_mq_handle in a parallel worker.
While at it, move mq_putmessage's "Assert(pq_mq_handle != NULL)"
to someplace where it's not trivially useless.
* Fix leak in logicalrep_worker_detach().
* Don't leak the startup-packet buffer in ProcessStartupPacket().
* Fix leak in evtcache.c's DecodeTextArrayToBitmapset().
If the presented array is toasted, this neglected to free the
detoasted copy, which was then leaked into EventTriggerCacheContext
(the usual pattern appears in the sketch after this list).
* I'm distressed by the amount of code that BuildEventTriggerCache
is willing to run while switched into a long-lived cache context.
Although the detoasted array is the only leak that Valgrind reports,
let's tighten things up while we're here. (DecodeTextArrayToBitmapset
is still run in the cache context, so doing this doesn't remove the
need for the detoast fix. But it reduces the surface area for other
leaks.)
* load_domaintype_info() intentionally leaked some intermediate cruft
into the long-lived DomainConstraintCache's memory context, reasoning
that the amount of leakage will typically not be much so it's not
worth doing a copyObject() of the final tree to avoid that. But
Valgrind knows nothing of engineering tradeoffs and complains anyway.
On the whole, the copyObject doesn't cost that much and this is surely
not a performance-critical code path, so let's do it the clean way.
* MarkGUCPrefixReserved didn't bother to clean up removed placeholder
GUCs at all, which shows up as a leak in one regression test.
It seems appropriate for it to do as much cleanup as
define_custom_variable does when replacing placeholders, so factor
that code out into a helper function. define_custom_variable's logic
was one brick shy of a load too: it forgot to free the separate
allocation for the placeholder's name.
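For the evtcache.c item, the usual pattern for freeing a detoasted
copy is (sketch):

    ArrayType  *arr = DatumGetArrayTypeP(datum);    /* may detoast */

    /* ... decode the array into the Bitmapset ... */

    /* free the detoasted copy, if one was made */
    if ((Pointer) arr != DatumGetPointer(datum))
        pfree(arr);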
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
Arrange that only the "aligned chunk" part of the allocated space is
included in a Valgrind vchunk. This suppresses complaints about that
vchunk being possibly lost because PG is retaining only pointers to
the aligned chunk. Also make sure that trailing wasted space is
marked NOACCESS.
As a tiny performance improvement, arrange that MCXT_ALLOC_ZERO zeroes
only the returned "aligned chunk", not the wasted padding space.
In passing, fix GetLocalBufferStorage to use MemoryContextAllocAligned
instead of rolling its own implementation, which was equally broken
according to Valgrind.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
When determining whether an allocated chunk is still reachable,
Valgrind will consider only pointers within what it believes to be
allocated chunks. Normally, all of a block obtained from malloc()
would be considered "allocated" --- but it turns out that if we use
VALGRIND_MEMPOOL_ALLOC to designate sub-section(s) of a malloc'ed
block as allocated, all the rest of that malloc'ed block is ignored.
This leads to lots of false positives of course. In particular,
in any multi-malloc-block context, all but the primary block were
reported as leaked. We also had a problem with context "ident"
strings, which were reported as leaked unless there was some other
pointer to them besides the one in the context header.
To fix, we need to use VALGRIND_MEMPOOL_ALLOC to designate
a context's management structs (the context struct itself and
any per-block headers) as allocated chunks. That forces moving
the VALGRIND_CREATE_MEMPOOL/VALGRIND_DESTROY_MEMPOOL calls into
the per-context-type code, so that the pool identifier can be
made as soon as we've allocated the initial block, but otherwise
it's fairly straightforward. Note that in Valgrind's eyes there
is no distinction between these allocations and the allocations
that the mmgr modules hand out to user code. That's fine for
now, but perhaps someday we'll want to do better yet.
When reading this patch, it's helpful to start with the comments
added at the head of mcxt.c.
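In outline, a context's startup now does something like this (a
sketch; the exact layout and sizes vary per context type):

    block = (AllocBlock) malloc(firstBlockSize);
    set = (AllocSet) (((char *) block) + ALLOC_BLOCKHDRSZ); /* layout assumed */

    /* the pool, and its own management structs as chunks of it */
    VALGRIND_CREATE_MEMPOOL(set, 0, false);
    VALGRIND_MEMPOOL_ALLOC(set, block, ALLOC_BLOCKHDRSZ);
    VALGRIND_MEMPOOL_ALLOC(set, set, sizeof(AllocSetContext));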
Author: Andres Freund <andres@anarazel.de>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/285483.1746756246@sss.pgh.pa.us
Discussion: https://postgr.es/m/20210317181531.7oggpqevzz6bka3g@alap3.anarazel.de
Previously, when running pgbench in pipeline mode with a custom script
that triggered retriable errors (e.g., serialization errors),
an assertion failure could occur:
Assertion failed: (res == ((void*)0)), function discardUntilSync, file pgbench.c, line 3515.
The root cause was that pgbench incorrectly assumed only a single
pipeline sync message would be received at the end. In reality,
multiple pipeline sync messages can be sent and must be handled properly.
This commit fixes the issue by updating pgbench to correctly process
multiple pipeline sync messages, preventing the assertion failure.
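The corrected loop is, in outline (simplified from the actual pgbench
code):

    /* drain results until all pipeline sync messages are consumed */
    while (num_syncs > 0)
    {
        PGresult   *res = PQgetResult(st->con);

        if (res == NULL)
            continue;           /* boundary between result sets */
        if (PQresultStatus(res) == PGRES_PIPELINE_SYNC)
            num_syncs--;        /* one per sync sent earlier */
        PQclear(res);
    }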
Back-patch to v15, where the bug was introduced.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Tatsuo Ishii <ishii@postgresql.org>
Discussion: https://postgr.es/m/CAHGQGwFAX56Tfx+1ppo431OSWiLLuW72HaGzZ39NkLkop6bMzQ@mail.gmail.com
Backpatch-through: 15
It was not very clear that the triggers are only allowed on plain tables
(not foreign tables). Also, rephrase the documentation for better
readability.
Follow up to commit 9e6104c66.
Reported-by: Etsuro Fujita <etsuro.fujita@gmail.com>
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Etsuro Fujita <etsuro.fujita@gmail.com>
Discussion: https://postgr.es/m/CAPmGK16XBs9ptNr8Lk4f-tJZogf6y-Prz%3D8yhvJbb_4dpsc3mQ%40mail.gmail.com
Backpatch-through: 13
In ReorderBufferProcessTXN(), used to send the data of a transaction to
an output plugin, INSERT ON CONFLICT changes (INTERNAL_SPEC_INSERT) are
delayed until a confirmation record arrives (INTERNAL_SPEC_CONFIRM),
updating the change being processed.
8c58624df462 has added an extra step after processing a change to update
the progress of the transaction, by calling the callback
update_progress_txn() based on the LSN stored in a change after a
threshold of CHANGES_THRESHOLD (100) is reached. This logic has missed
the fact that for an INSERT ON CONFLICT change the data is freed once
processed, hence update_progress_txn() could be called with an LSN read
from a change that has already been freed. This could result in random
crashes, depending on the workload.
Per discussion, this issue is fixed by reusing in update_progress_txn()
the LSN of the change being processed, saved at the beginning of the
loop, meaning that for an INTERNAL_SPEC_CONFIRM change the progress is
updated using the LSN of the INTERNAL_SPEC_CONFIRM change, and not the
LSN from its INTERNAL_SPEC_INSERT change. This is actually more
correct, as we want to update the progress to point to the
INTERNAL_SPEC_CONFIRM change.
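In outline, the fix looks like this (a sketch; the change-processing
call is abbreviated):

    XLogRecPtr  change_lsn;

    /* saved at the top of the loop, before the change can be freed */
    change_lsn = change->lsn;

    /* ... process the change; INTERNAL_SPEC_INSERT data may be freed ... */

    if (++changes_count >= CHANGES_THRESHOLD)
    {
        rb->update_progress_txn(rb, txn, change_lsn);   /* not change->lsn */
        changes_count = 0;
    }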
Masahiko Sawada has found a nice trick to reproduce the issue: hardcode
CHANGES_THRESHOLD at 1 and run test_decoding (test "ddl" being enough)
on an instance running valgrind. The bug has been analyzed by Ethan
Mertz, who also originally suggested the solution used in this patch.
Issue introduced by 8c58624df462, so backpatch down to v16.
Author: Ethan Mertz <ethan.mertz@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/aIsQqDZ7x4LAQ6u1@paquier.xyz
Backpatch-through: 16
Currently, ALTER DATABASE/ROLE/SYSTEM RESET [ALL] with an unknown
custom GUC with a prefix reserved by MarkGUCPrefixReserved() errors
(unless a superuser runs a RESET ALL variant). This is problematic
for cases such as an extension library upgrade that removes a GUC.
To fix, simply make sure the relevant code paths explicitly allow
it. Note that we require superuser or privileges on the parameter
to reset it. This is perhaps a bit more restrictive than is
necessary, but it's not clear whether further relaxing the
requirements is safe.
Oversight in commit 88103567cb. The ALTER SYSTEM fix is dependent
on commit 2d870b4aef, which first appeared in v17. Unfortunately,
back-patching that commit would introduce ABI breakage, and while
that breakage seems unlikely to bother anyone, it doesn't seem
worth the risk. Hence, the ALTER SYSTEM part of this commit is
omitted on v15 and v16.
Reported-by: Mert Alev <mert@futo.org>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/18964-ba09dea8c98fccd6%40postgresql.org
Backpatch-through: 15
PostgreSQL always sends the BackendKeyData message at connection
startup, but there are some third party backend implementations out
there that don't support cancellation, and don't send the message
[1]. While the protocol docs left it up for interpretation if that is
valid behavior, libpq in PostgreSQL 17 and below accepted it. It does
not seem like the libpq behavior was intentional though, since it did
so by sending CancelRequest messages with all zeros to such servers
(instead of returning an error or making the cancel a no-op).
In version 18 the behavior was changed to return an error when trying
to create the cancel object with PQgetCancel() or PQcancelCreate().
This was done without any discussion, as part of supporting different
lengths of cancel packets for the new 3.2 version of the protocol.
This commit changes the behavior of PQgetCancel() / PQcancel() once
more to only return an error when the cancel object is actually used
to send a cancellation, instead of when merely creating the object.
The reason to do so is that some clients [2] create a cancel object as
part of their connection creation logic (thus having the cancel object
ready for later when they need it), so if creating the cancel object
returns an error, the whole connection attempt fails. By delaying the
error, such clients will still be able to connect to the third party
backend implementations in question, but when actually trying to
cancel a query, the user will be notified that this is not possible
for the server that they are connected to.
This commit only changes the behavior of the older PQgetCancel() /
PQcancel() functions, not the more modern PQcancelCreate() family of
functions. I.e. PQcancelCreate() returns a failed connection object
(CONNECTION_BAD) if the server didn't send a cancellation key. Unlike
the old PQgetCancel() function, we're not aware of any clients in the
field that use PQcancelCreate() during connection startup in a way
that would prevent connecting to such servers.
[1] AWS RDS Proxy is definitely one of them, and CockroachDB might be
another.
[2] psycopg2 (but not psycopg3).
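From a client's perspective, the behavior is now (sketch):

    PGcancel   *cancel = PQgetCancel(conn); /* succeeds even if the server
                                             * sent no cancellation key */
    char        errbuf[256];

    if (cancel != NULL && !PQcancel(cancel, errbuf, sizeof(errbuf)))
        fprintf(stderr, "cancel not possible: %s\n", errbuf); /* error here */
    PQfreeCancel(cancel);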
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Backpatch-through: 18
Discussion: https://www.postgresql.org/message-id/20250617.101056.1437027795118961504.ishii%40postgresql.org
A deadlock can occur when the DDL command and the apply worker acquire
catalog locks in different orders while dropping replication origins.
The issue is rare in PG16 and higher branches because, in most cases, the
tablesync worker performs the origin drop in those branches, and its
locking sequence does not conflict with DDL operations.
This patch ensures consistent lock acquisition to prevent such deadlocks.
As per buildfarm.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Ajin Cherian <itsajin@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/bab95e12-6cc5-4ebb-80a8-3e41956aa297@gmail.com
Commit c407d5426b87 added tab completion for ALTER ROLE|USER ... RESET,
with the intent to offer only the variables actually set on the role.
But as soon as the user started typing something, it would start to
offer all possible matching variables.
Fix this the same way ALTER DATABASE ... RESET does it, i.e. by
properly considering the prefix.
A second issue causing similar symptoms (offering variables that are not
actually set for a role) was caused by a match to another pattern. The
ALTER DATABASE ... RESET was already excluded, so do the same thing for
ROLE/USER.
Report and fix by Dagfinn Ilmari Mannsåker. Backpatch to 18, same as
c407d5426b87.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/87qzyghw2x.fsf%40wibble.ilmari.org
Discussion: https://postgr.es/m/87tt4lumqz.fsf%40wibble.ilmari.org
Backpatch-through: 18
Commit 9df8727c5067 failed to schema-qualify the unnest() call in the
query used to list the variables in ALTER DATABASE ... RESET. If there's
another unnest() function in the search_path, this could cause either
failures, or even security issues (when the tab-completion gets used by
privileged accounts).
Report and fix by Dagfinn Ilmari Mannsåker. Backpatch to 18, same as
9df8727c5067.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/87qzyghw2x.fsf%40wibble.ilmari.org
Discussion: https://postgr.es/m/87tt4lumqz.fsf%40wibble.ilmari.org
Backpatch-through: 18
pg_dump sorts objects by their logical names, e.g. (nspname, relname,
tgname), before dependency-driven reordering. That removes one source
of logically-identical databases differing in their schema-only dumps.
In other words, it helps with schema diffing. The logical name sort
ignored essential sort keys for constraints, operators, PUBLICATION
... FOR TABLE, PUBLICATION ... FOR TABLES IN SCHEMA, operator classes,
and operator families. pg_dump's sort then depended on object OID,
yielding spurious schema diffs. After this change, OIDs affect dump
order only in the event of catalog corruption. While pg_dump also
wrongly ignored pg_collation.collencoding, CREATE COLLATION restrictions
have been keeping that imperceptible in practical use.
Use techniques like those used for object types already having full sort key
coverage. Where the pertinent queries weren't fetching the ignored sort
keys, this adds columns to those queries and stores those keys in memory
for the long term.
The ignorance of sort keys became more problematic when commit
172259afb563d35001410dc6daad78b250924038 added a schema diff test
sensitive to it. Buildfarm member hippopotamus witnessed that.
However, dump order stability isn't a new goal, and this might avoid
other dump comparison failures. Hence, back-patch to v13 (all supported
versions).
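As an example of the technique, the comparator gains tiebreakers of
roughly this shape (a sketch; the real patch covers several object
types and fields):

    if (obj1->objType == DO_CONSTRAINT)
    {
        ConstraintInfo *c1 = (ConstraintInfo *) obj1;
        ConstraintInfo *c2 = (ConstraintInfo *) obj2;

        /* e.g. break ties on the owning table's name */
        cmpval = strcmp(c1->contable->dobj.name, c2->contable->dobj.name);
        if (cmpval != 0)
            return cmpval;
    }
    /* OID comparison is now reached only with corrupt catalogs */
    return oidcmp(obj1->catId.oid, obj2->catId.oid);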
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20250707192654.9e.nmisch@google.com
Backpatch-through: 13
This patch adds two new counters to pg_stat_statements:
- generic_plan_calls
- custom_plan_calls
These counters track how many times a prepared statement was executed
using a generic or custom plan, respectively, providing a global
equivalent at query level, for top and non-top levels, of
pg_prepared_statements whose data is restricted to a single session.
This commit builds upon e125e360020a. The module is bumped to version
1.13. PGSS_FILE_HEADER is bumped as well, something that the latest
patches touching the on-disk format of the PGSS file did not actually
bother with since 2022.
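In outline, the counter bump keys off the plan type recorded in
PlannedStmt (a sketch; field and enum names as assumed here):

    if (queryDesc->plannedstmt->cached_plan_type == PLAN_CACHE_GENERIC)
        entry->counters.generic_plan_calls++;
    else if (queryDesc->plannedstmt->cached_plan_type == PLAN_CACHE_CUSTOM)
        entry->counters.custom_plan_calls++;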
Author: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Ilia Evdokimov <ilya.evdokimov@tantorlabs.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Nikolay Samokhvalov <nik@postgres.ai>
Discussion: https://postgr.es/m/CAA5RZ0uFw8Y9GCFvafhC=OA8NnMqVZyzXPfv_EePOt+iv1T-qQ@mail.gmail.com
Commit 719dcf3c42 introduced a field called CachedPlanType in
PlannedStmt to allow extensions to determine whether a cached plan is
generic or custom.
After discussion, the concepts that we want to track are a bit wider
than initially anticipated, as it is closer to knowing from which
"source" or "origin" a PlannedStmt has been generated or retrieved.
Custom and generic cached plans are a subset of that.
Based on the state of HEAD, we have been able to define two more
origins:
- "standard", for the case where PlannedStmt is generated in
standard_planner(), the most common case.
- "internal", for the fake PlannedStmt generated internally by some
query patterns.
This could be tuned in the future depending on what is needed. This
looks like a good starting point, at least. The default value is called
"UNKNOWN", provided as a fallback value. This value is not used in the
core code; the idea is to let extensions that build their own
PlannedStmts know about this new field.
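The resulting set of origins is, in sketch form (the committed names
may differ):

    typedef enum PlannedStmtOrigin
    {
        PLAN_STMT_UNKNOWN,          /* fallback; not used in core */
        PLAN_STMT_STANDARD,         /* from standard_planner() */
        PLAN_STMT_INTERNAL,         /* fake PlannedStmt built internally */
        PLAN_STMT_CACHE_GENERIC,    /* generic cached plan */
        PLAN_STMT_CACHE_CUSTOM      /* custom cached plan */
    } PlannedStmtOrigin;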
Author: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/aILaHupXbIGgF2wJ@paquier.xyz
The sentence in question gave readers the impression that vacuumdb
removes statistics for a period of time while analyzing, but it's
actually meant to convey that --analyze-in-stages temporarily
replaces existing statistics with ones generated with lower
statistics targets.
Reported-by: Frédéric Yhuel <frederic.yhuel@dalibo.com>
Reviewed-by: Frédéric Yhuel <frederic.yhuel@dalibo.com>
Reviewed-by: "David G. Johnston" <david.g.johnston@gmail.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/4b94ca16-7a6d-4581-b2aa-4ea79dbc082a%40dalibo.com
Backpatch-through: 18
Presently, pg_upgrade assumes that all non-default tablespaces
don't move to different directories during upgrade. Unfortunately,
this isn't true for in-place tablespaces, which move to the new
cluster's pg_tblspc directory. This commit teaches pg_upgrade to
handle in-place tablespaces by retrieving the tablespace
directories for both the old and new clusters. In turn, we can
relax the prohibition on non-default tablespaces for same-version
upgrades, i.e., if all non-default tablespaces are in-place,
pg_upgrade may proceed.
This change is primarily intended to enable additional pg_upgrade
testing with non-default tablespaces, as is done in
006_transfer_modes.pl.
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aA_uBLYMUs5D66Nb%40nathan
Recent discussions of the mechanisms used to manage global data have
raised concerns about their robustness and security. Rather than try
to deal with those concerns at a very late stage of the release cycle,
the conclusion is to revert these features and work on them for the
next release.
This reverts parts or all of the following commits:
1495eff7bdb Non text modes for pg_dumpall, correspondingly change pg_restore
5db3bf7391d Clean up from commit 1495eff7bdb
289f74d0cb2 Add more TAP tests for pg_dumpall
2ef57908067 Fix a couple of error messages and tests for them
b52a4a5f285 Clean up error messages from 1495eff7bdb
4170298b6ec Further cleanup for directory creation on pg_dump/pg_dumpall
22cb6d28950 Fix memory leak in pg_restore.c
928394b664b Improve various new-to-v18 appendStringInfo calls
39729ec01d2 Fix fat fingering in 22cb6d28950
5822bf21d50 Add missing space in pg_restore documentation.
f09088a01d3 Free memory properly in pg_restore.c
40b9c27014d pg_restore cleanups
4aad2cb7707 Portability fix: isdigit() must be passed an unsigned char.
88e947136b4 Fix typos and grammar in the code
f60420cff66 doc: Alphabetize long options for pg_dump[all].
bc35adee8d7 doc: Put new options in consistent order on man pages
a876464abc7 Message style improvements
dec6643487b Improve pg_dump/pg_dumpall help synopses and terminology
0ebd2425558 Run pgperltidy
Discussion: https://postgr.es/m/20250708212819.09.nmisch@google.com
Backpatch-to: 18
Reviewed-by: Noah Misch <noah@leadboat.com>
The configure checks used two incorrect functions when checking the
presence of some routines in an environment:
- __get_cpuidex() for the check of __cpuidex().
- __get_cpuid() for the check of __cpuid().
This means that Postgres has never been able to detect the presence of
these functions, impacting environments where these exist, like Windows.
Simply fixing the function name does not work. For example, using
configure with MinGW on Windows causes the checks to detect all four of
__get_cpuid(), __get_cpuid_count(), __cpuidex() and __cpuid() as
available, causing a compilation failure: this messes up the MinGW
headers, as we would include both <intrin.h> and <cpuid.h>.
The Postgres code expects only one of { __get_cpuid(), __cpuid() } and
one of { __get_cpuid_count(), __cpuidex() } to exist. This commit
reshapes the configure checks to do exactly what meson is doing, which
has been working well for us: check one, then the other, but never allow
both to be detected in a given build.
The logic has been wrong since 3dc2d62d0486 and 792752af4eb5, where
these checks were introduced (the second case is most likely a
copy-pasto coming from the first case), with meson documenting that the
configure checks were broken. As far as I can see, they are not broken
once applied consistently with what the code expects, but let's see if
the buildfarm has something different to say. The comment in
meson.build is adjusted as well, to reflect the new reality.
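For reference, the callers are structured along these lines, which is
why exactly one of each pair must be detected:

    unsigned int exx[4] = {0, 0, 0, 0};

    #if defined(HAVE__GET_CPUID)
        __get_cpuid(1, &exx[0], &exx[1], &exx[2], &exx[3]);
    #elif defined(HAVE__CPUID)
        __cpuid(exx, 1);
    #else
    #error cpuid instruction not available
    #endif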
Author: Lukas Fittl <lukas@fittl.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aIgwNYGVt5aRAqTJ@paquier.xyz
Backpatch-through: 13
If the client sent a query cancel request with backend PID 0, it
tripped an assertion. With assertions disabled, you got this in the
log instead:
LOG: invalid cancel request with PID 0
LOG: wrong key in cancel request for process 0
Query cancellations don't even require authentication, so we better
tolerate bogus requests. Fix by turning the assertion into a regular
runtime check.
Spotted while testing libpq behavior with a modified server that
didn't send BackendKeyData to the client.
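The shape of the fix (a sketch; the variable name is assumed):

    /* was: Assert(backendPID != 0); */
    if (backendPID == 0)
    {
        ereport(LOG,
                (errmsg("invalid cancel request with PID 0")));
        return;
    }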
Backpatch-through: 18
For many optional libraries, we extract the -L and -l switches needed
to link the library from a helper program such as llvm-config. In
some cases we put the resulting -L switches into LDFLAGS ahead of
-L switches specified via --with-libraries. That risks breaking
the user's intention for --with-libraries.
It's not such a problem if the library's -L switch points to a
directory containing only that library, but on some platforms a
library helper may "helpfully" offer a switch such as -L/usr/lib
that points to a directory holding all standard libraries. If the
user specified --with-libraries in hopes of overriding the standard
build of some library, the -L/usr/lib switch prevents that from
happening since it will come before the user-specified directory.
To fix, avoid inserting these switches directly into LDFLAGS during
configure, instead adding them to LIBDIRS or SHLIB_LINK. They will
still eventually get added to LDFLAGS, but only after the switches
coming from --with-libraries.
The same problem exists for -I switches: those coming from
--with-includes should appear before any coming from helper programs
such as llvm-config. We have not heard field complaints about this
case, but it seems certain that a user attempting to override a
standard library could have issues.
The changes for this go well beyond configure itself, however,
because many Makefiles have occasion to manipulate CPPFLAGS to
insert locally-desirable -I switches, and some of them got it wrong.
The correct ordering is any -I switches pointing at within-the-
source-tree-or-build-tree directories, then those from the tree-wide
CPPFLAGS, then those from helper programs. There were several places
that risked pulling in a system-supplied copy of libpq headers, for
example, instead of the in-tree files. (Commit cb36f8ec2 fixed one
instance of that a few months ago, but this exercise found more.)
The Meson build scripts may or may not have any comparable problems,
but I'll leave it to someone else to investigate that.
Reported-by: Charles Samborski <demurgos@demurgos.net>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/70f2155f-27ca-4534-b33d-7750e20633d7@demurgos.net
Backpatch-through: 13
When I prepared 71c0921b6 et al yesterday, I was thinking that the
logic involving explicitly freeing the node_list output was still
needed to dodge leakage bugs in libxml2. But I was misremembering:
we introduced that only because with early 2.13.x releases we could
not trust xmlParseBalancedChunkMemory's result code, so we had to
look to see if a node list was returned or not. There's no reason
to believe that xmlParseBalancedChunkMemory will fail to clean up
the node list when required, so simplify. (This essentially
completes reverting all the non-cosmetic changes in 6082b3d5d.)
Reported-by: Jim Jones <jim.jones@uni-muenster.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/997668.1753802857@sss.pgh.pa.us
Backpatch-through: 13
pgfdw_report_error has the same design fault as elog/ereport
do, namely that it might or might not return depending on elevel.
While those functions are too widely used to redesign, there are
only about 30 call sites for pgfdw_report_error, and it's not
exposed for extension use. So let's rethink it. Split it into
pgfdw_report_error() which hard-wires ERROR elevel and is marked
pg_noreturn, and pgfdw_report() which allows only elevels less
than ERROR. (Thanks to Álvaro Herrera for suggesting this naming.)
The motivation for doing this now is that in the wake of commit
80aa9848b, which removed a bunch of PG_TRYs from postgres_fdw,
we're seeing more thorough flow analysis there from C compilers
and Coverity. Marking pgfdw_report_error as noreturn where
appropriate should help prevent false-positive complaints.
We could alternatively have invented a macro wrapper similar
to what we use for elog/ereport, but that code is sufficiently
fragile that I didn't find it appetizing to make another copy.
Since 80aa9848b already changed pgfdw_report_error's signature,
this won't make back-patching any harder than it was already.
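The resulting pair looks roughly like this (simplified signatures):

    /* always raises ERROR; flow analysis knows it does not return */
    pg_noreturn void pgfdw_report_error(PGresult *res, PGconn *conn,
                                        const char *sql);

    /* for NOTICE/WARNING and the like; asserts elevel < ERROR */
    void pgfdw_report(int elevel, PGresult *res, PGconn *conn,
                      const char *sql);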
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/420221.1753714491@sss.pgh.pa.us
In the wake of commit 80aa9848b, a few compilers think that
postgresAcquireSampleRowsFunc's "reltuples" might be used
uninitialized. The logic is visibly correct, both before
and after that change; presumably what happened here is that
the previous presence of a setjmp() in the function stopped
them from attempting any flow analysis at all. Add a dummy
initialization to silence the warning.
Reported-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAExHW5tkerCufA_F6oct5dMJ61N+yVrVgYXL7M8dD-5_zXjrDw@mail.gmail.com
Previously, if a background worker crashed and the server restarted
with restart_after_crash enabled, the worker was not restarted
as expected. This issue was fixed by commit b5d084c5353,
which ensures that background workers without the never-restart flag
are correctly restarted after a crash-and-restart cycle.
To guard against regressions, this commit adds a test that verifies
a background worker successfully restarts in such a scenario.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: ChangAo Chen <cca5507@qq.com>
Discussion: https://postgr.es/m/CAHGQGwHF-PdUOgiXCH_8K5qBm8b13h0Qt=dSoFXZybXQdbf-tw@mail.gmail.com
This commit adds an extra --timeout=PG_TEST_TIMEOUT_DEFAULT to the call
of pg_isready done in is_alive(), so that the call has more leeway on
machines constrained on resources.
By default the timeout is 180s, and it can be changed depending on the
environment where the tests are run.
Per buildfarm member mamba, where the default timeout of 3s used by
pg_isready has proved not to be enough, as the postmaster may not have
the time it needs to reply to a ping request.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/29b637df-f818-4b52-986a-f11ba28300e9@gmail.com
This commit documents differences in the definition of word separators for
the initcap function between libc and ICU locale providers.
Backpatch to all supported branches.
Discussion: https://postgr.es/m/804cc10ef95d4d3b298e76b181fd9437%40postgrespro.ru
Author: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Backpatch-through: 13
There've been a few complaints that it can be overly difficult to figure
out why the planner picked a Memoize plan. To help address that, here we
adjust the EXPLAIN output to display the following additional details:
1) The estimated number of cache entries that can be stored at once
2) The estimated number of unique lookup keys that we expect to see
3) The number of lookups we expect
4) The estimated hit ratio
Technically #4 can be calculated using #1, #2 and #3, but it's not a
particularly obvious calculation, so we opt to display it explicitly.
The original patch by Lukas Fittl only displayed the hit ratio, but
there was a fear that might lead to more questions about how that was
calculated. The idea with displaying all 4 is to be transparent which
may allow queries to be tuned more easily. For example, if #2 isn't
correct then maybe extended statistics or a manual n_distinct estimate can
be used to help fix poor plan choices.
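For the record, #4 is derived from the other three along these lines
(mirroring the cost model; the committed formula may differ in
detail):

    /* est_calls: #3, est_unique_keys: #2, est_cache_entries: #1 */
    hit_ratio = ((est_calls - est_unique_keys) / est_calls) *
                (est_cache_entries / Max(est_unique_keys, est_cache_entries));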
Author: Ilia Evdokimov <ilya.evdokimov@tantorlabs.com>
Author: Lukas Fittl <lukas@fittl.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CAP53Pky29GWAVVk3oBgKBDqhND0BRBN6yTPeguV_qSivFL5N_g%40mail.gmail.com
This mostly reverts commit 6082b3d5d, "Use xmlParseInNodeContext
not xmlParseBalancedChunkMemory". It turns out that
xmlParseInNodeContext will reject text chunks exceeding 10MB, while
(in most libxml2 versions) xmlParseBalancedChunkMemory will not.
The bleeding-edge libxml2 bug that we needed to work around a year
ago is presumably no longer a factor, and the argument that
xmlParseBalancedChunkMemory is semi-deprecated is not enough to
justify a functionality regression. Hence, go back to doing it
the old way.
Reported-by: Michael Paquier <michael@paquier.xyz>
Author: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Erik Wienhold <ewie@ewie.name>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aIGknLuc8b8ega2X@paquier.xyz
Backpatch-through: 13
Commit ffae5cc5a6024b4e25ec920ed5c4dfac649605f8 added this hint in 2006,
but it's now obsolete and doesn't reflect what users should really check
in this situation. We were not able to agree on a new hint, so just delete
the existing one and update the comments to mention one possibility that
is known to cause problems of this kind: something other than PostgreSQL
is modifying files in the PostgreSQL data directory.
Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Robert Haas <rhaas@postgresql.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/CAKZiRmxNbcaL76x=09Sxf7aUmrRQJBf8drzDdUHo+j9_eM+VMg@mail.gmail.com
Commit 473a575e05979b4dbb28b3f2544f4ec8f184ce65 purported to make this
function stash the error message in *syncrep_parse_result_p, but
it didn't actually do so.
As a result, an attempt to set synchronous_standby_names to any value
that does not parse resulted in a generic "parser failed." message
rather than anything more specific. This fixes that.
Discussion: http://postgr.es/m/CA+TgmoYF9wPNZ-Q_EMfib_espgHycY-eX__6Tzo2GpYpVXqCdQ@mail.gmail.com
Backpatch-through: 18
This routine includes three code paths that can fail, with the
allocated prepared statement name going out of scope.
Per report from Coverity. Oversight in commit a6eabec6808c, that has
played with the order of some ecpg_strdup() calls in this code path.
The callback added in fc415edf8ca8, used to check if there is any
pending data to flush for fixed-numbered statistics by looping across
all the built-in and custom stats kinds with a call to
have_fixed_pending_cb, is proving to show up in profiles of workloads
that do not report any stats (read-only, no function calls, no WAL, no
IO, etc). The code used in v17 was cheaper than what HEAD has
introduced, relying on three boolean checks for WAL, SLRU and IO stats.
This commit switches the code to use a more efficient approach than
fc415edf8ca8, with a single boolean flag that can be switched to "true"
by any fixed-numbered stats kind to force pgstat_report_stat() to go
through one round of reports. The flag is reset by pgstat_report_stat()
once a full round of reports is done. The flag being false means that
fixed-numbered stats kinds saw no activity, and that there is no pending
data to flush.
ac000fca743e took one step in improving the performance by reducing the
number of stats kinds that the backend can hold. This commit takes a
more drastic step by bringing back the code efficiency to what it was
before v18 with a cheap check at the beginning of pgstat_report_stat()
for its fast-exit path.
The callback have_static_pending_cb is removed as an effect of all that.
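In outline, the new fast path is (a sketch; names as assumed here):

    static bool pgstat_report_fixed = false;    /* set by any fixed-numbered
                                                 * stats kind with pending data */

    /* at the top of pgstat_report_stat() */
    if (!force && !pgstat_report_fixed && dlist_is_empty(&pgStatPending))
        return 0;               /* cheap fast-exit path */

    /* ... flush everything ... */
    pgstat_report_fixed = false;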
Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/eb224uegsga2hgq7dfq3ps5cduhpqej7ir2hjxzzozjthrekx5@dysei6buqthe
Backpatch-through: 18
If the number of sync requests is big enough, the palloc() call in
AbsorbSyncRequests() will attempt to allocate more than 1 GB of memory,
resulting in failure. This can lead to an infinite loop in the checkpointer
process, as it repeatedly fails to absorb the pending requests.
This commit introduces the following changes to cope with this problem:
1. Turn pending checkpointer requests array in shared memory into a bounded
ring buffer.
2. Limit maximum ring buffer size to 10M items.
3. Make AbsorbSyncRequests() process requests incrementally in 10K batches.
Even #2 alone makes the whole queue size fit the maximum palloc() size
of 1 GB, while #3 avoids long periods of continuous lock holding.
This commit is for master only. Simpler fix, which just limits a request
queue size to 10M, will be backpatched.
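In outline, #3 works like this (a sketch; the ring buffer accessors
are invented):

    #define ABSORB_BATCH_SIZE 10000     /* 10K requests per batch */

    while (!ring_is_empty(queue))       /* hypothetical accessor */
    {
        int         n = Min(ABSORB_BATCH_SIZE, ring_count(queue));

        LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE);
        ring_copy_out(queue, batch, n); /* hypothetical */
        LWLockRelease(CheckpointerCommLock);

        /* the lock is not held while remembering the requests */
        for (int i = 0; i < n; i++)
            RememberSyncRequest(&batch[i].ftag, batch[i].type);
    }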
Reported-by: Ekaterina Sokolova <e.sokolova@postgrespro.ru>
Discussion: https://postgr.es/m/db4534f83a22a29ab5ee2566ad86ca92%40postgrespro.ru
Author: Maxim Orlov <orlovmg@gmail.com>
Co-authored-by: Xuneng Zhou <xunengzhou@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Similar checks are done for the mandatory table AM callbacks. A portion
of the index AM callbacks are optional and can be NULL; the rest is
mandatory and is documented as such in the documentation and in amapi.h.
These checks are useful to detect quickly if all the mandatory callbacks
are defined when implementing a new index access method, as the
assertions are run when loading the AM.
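The checks are simple assertions of this shape (a subset shown, on the
IndexAmRoutine being loaded):

    Assert(routine->ambuild != NULL);
    Assert(routine->ambuildempty != NULL);
    Assert(routine->aminsert != NULL);
    Assert(routine->ambulkdelete != NULL);
    Assert(routine->amvacuumcleanup != NULL);
    Assert(routine->amcostestimate != NULL);
    Assert(routine->ambeginscan != NULL);
    Assert(routine->amrescan != NULL);
    Assert(routine->amendscan != NULL);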
Author: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/ME0P300MB0445795D31CEAB92C58B41FDB651A@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
Valgrind complains that the PQconninfoOption array returned by libpq
is leaked. We apparently believed that we could suppress that warning
by storing that array's address in a static variable. However, modern
C compilers are bright enough to optimize the static variable away.
We could escalate that arms race by making the variable global.
But on the whole it seems better to revise the code so that it
can free libpq's result properly. The only thing that costs
us is copying the parameter-name keywords, which seems like a
pretty negligible cost in a function that runs at most once per
process.
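The revised shape is (a sketch; the surrounding storage is omitted):

    PQconninfoOption *opts = PQconndefaults();
    int         n = 0;

    for (PQconninfoOption *opt = opts; opt->keyword != NULL; opt++)
        keywords[n++] = pstrdup(opt->keyword);  /* keep our own copy */

    PQconninfoFree(opts);       /* now freed properly, nothing leaked */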
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/2976982.1748049023@sss.pgh.pa.us
Remove a bunch of PG_TRY constructs, de-volatilize related
variables, remove some PQclear calls in error paths.
Aside from making the code simpler and shorter, this should
provide some marginal performance gains.
For ease of review, I did not re-indent code within the removed
PG_TRY constructs. That'll be done in a separate patch.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/2976982.1748049023@sss.pgh.pa.us
Commit 232d8caea fixed a case where postgres_fdw could lose track
of a PGresult object, resulting in a process-lifespan memory leak.
But I have little faith that there aren't other potential PGresult
leakages, now or in future, in the backend modules that use libpq.
Therefore, this patch proposes infrastructure that makes all
PGresults returned from libpq act as though they are palloc'd
in the CurrentMemoryContext (with the option to relocate them to
another context later). This should greatly reduce the risk of
careless leaks, and it also permits removal of a bunch of code
that attempted to prevent such leaks via PG_TRY blocks.
This patch adds infrastructure that wraps each PGresult in a
"libpqsrv_PGresult" that provides a memory context reset callback
to PQclear the PGresult. Code using this abstraction is inherently
memory-safe to the same extent as we are accustomed to in most backend
code. Furthermore, we add some macros that automatically redirect
calls of the libpq functions concerned with PGresults to use this
infrastructure, so that almost no source-code changes are needed to
wheel this infrastructure into place in all the backend code that
uses libpq.
Perhaps in future we could create similar infrastructure for
PGconn objects, but there seems less need for that.
This patch just creates the infrastructure and makes relevant code
use it, including reverting 232d8caea in favor of this mechanism.
A good deal of follow-on simplification is possible now that we don't
have to be so cautious about freeing PGresults, but I'll put that in
a separate patch.
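At its core, the wrapper is a memory context reset callback around the
PGresult (a sketch; the committed struct differs):

    typedef struct libpqsrv_PGresult
    {
        PGresult   *res;
        MemoryContextCallback cb;   /* PQclears res at context reset */
    } libpqsrv_PGresult;

    static void
    libpqsrv_PGresult_reset(void *arg)
    {
        PQclear(((libpqsrv_PGresult *) arg)->res);
    }

    /* at wrap time, in CurrentMemoryContext */
    wrapped->cb.func = libpqsrv_PGresult_reset;
    wrapped->cb.arg = wrapped;
    MemoryContextRegisterResetCallback(CurrentMemoryContext, &wrapped->cb);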
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://postgr.es/m/2976982.1748049023@sss.pgh.pa.us
This flag was effectively a no-op in EXEC_BACKEND (ie, Windows)
builds, because it was kept in the process-local HTAB struct,
and it could only ever become set in the postmaster's copy.
The simplest fix is to move it to the shared HASHHDR struct.
We could keep a copy in HTAB as well, as we do with keysize
and some other fields, but the "too much contention" argument
doesn't seem to apply here: we only examine isfixed during
element_alloc(), which had better not get hit very often for
a shared hashtable.
This oversight dates to 7c797e719 which invented the option.
But back-patching doesn't seem appropriate given the lack of
field complaints. If there is anyone running an affected
workload on Windows, they might be unhappy about the behavior
changing in a minor release.
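The gist of the change (sketch):

    /* in element_alloc(): consult the flag shared by all backends */
    if (hashp->hctl->isfixed)   /* was: hashp->isfixed */
        return false;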
Author: Aidar Imamov <a.imamov@postgrespro.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/4d0cb35ff01c5c74d2b9a582ecb73823@postgrespro.ru
This changes the grammar for REINDEX, CHECKPOINT, CLUSTER, ANALYZE/ANALYSE;
they still accept the same options as before, but the grammar is written
differently for convenience of future development.
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/202507231538.ir7pjzoow6oe@alvherre.pgsql
Previously, if a background worker crashed (e.g., due to a SIGKILL) and
the server restarted due to restart_after_crash being enabled,
the worker was not restarted as expected. Background workers without
the never-restart flag should automatically restart in this case.
This issue was introduced in commit 28a520c0b77, which failed to reset
the rw_pid field in the RegisteredBgWorker struct for the crashed worker.
This commit fixes the problem by resetting rw_pid for all eligible
background workers during the crash-and-restart cycle.
Back-patched to v18, where the bug was introduced.
Bug fix patches were proposed by Andrey Rudometov and ChangAo Chen,
but this commit uses a different approach.
Reported-by: Andrey Rudometov <unlimitedhikari@gmail.com>
Reported-by: ChangAo Chen <cca5507@qq.com>
Author: Andrey Rudometov <unlimitedhikari@gmail.com>
Author: ChangAo Chen <cca5507@qq.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: ChangAo Chen <cca5507@qq.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Discussion: https://postgr.es/m/CAF6JsWiO=i24qYitWe6ns1sXqcL86rYxdyU+pNYk-WueKPSySg@mail.gmail.com
Discussion: https://postgr.es/m/tencent_E00A056B3953EE6440F0F40F80EC30427D09@qq.com
Backpatch-through: 18
LatchWaitSetPostmasterDeathPos, the latch event position for the
postmaster death event, is initialized only when running under the
postmaster (IsUnderPostmaster). WaitLatch() nevertheless considered it
a valid wait target in single-user mode (!IsUnderPostmaster), which was
incorrect.
One code path found to fail with an assertion failure is a database drop
in single-user mode while waiting in WaitForProcSignalBarrier() after
the drop.
Oversight in commit 84e5b2f07a5e.
Author: Patrick Stählin <me@packi.ch>
Co-authored-by: Ronan Dunklau <ronan.dunklau@aiven.io>
Discussion: https://postgr.es/m/18996-3a2744c8140488de@postgresql.org
Backpatch-through: 18
This commit changes stats kinds to have the following bounds, making
their handling in core cheaper by default:
- PGSTAT_KIND_CUSTOM_MIN 128 -> 24
- PGSTAT_KIND_MAX 256 -> 32
The original numbers were rather high, and showed an impact on
performance in pgstat_report_stat() for the case of simple queries with
its early-exit path if there are no pending statistics to flush. This
logic will be improved more in a follow-up commit to bring the
performance of pgstat_report_stat() on par with v17 and older versions.
Lowering the bounds is a change worth doing on its own, independently of
the other improvement.
These new numbers should be enough to leave some room for the following
years for built-in and custom stats kinds, with stable ID numbers. At
least that should be enough to start with this facility for extension
developers. It can always be increased in the tree later, depending on
future requirements.
Per discussion with Andres Freund and Bertrand Drouvot.
Discussion: https://postgr.es/m/eb224uegsga2hgq7dfq3ps5cduhpqej7ir2hjxzzozjthrekx5@dysei6buqthe
Backpatch-through: 18
This function is declared as returning a uint8, but it returns a
bool in one code path. To fix, return (uint8) 0 instead of false
there. This should behave exactly the same as before, but it might
prevent future compiler complaints.
Oversight in commit a892234f83.
Author: Julien Rouhaud <rjuju123@gmail.com>
Discussion: https://postgr.es/m/aIHluT2isN58jqHV%40jrouhaud
Previously, the tool could replay the same transaction twice, once during
recovery, then again during replication after the subscriber was set up.
This occurred because the same recovery_target_lsn was used both to
finalize recovery and to start replication. If
recovery_target_inclusive = true, the transaction at that LSN would be
applied during recovery and then sent again by the publisher, leading
to duplication.
To prevent this, we now set recovery_target_inclusive = false. This
ensures the transaction at recovery_target_lsn is not reapplied during
recovery, avoiding duplication when replication begins.
Bug #18897
Reported-by: Zane Duffield <duffieldzane@gmail.com>
Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/18897-d3db67535860dddb@postgresql.org
PlannedStmt gains a new field, called CachedPlanType, able to track if a
given plan tree originates from the cache and if we are dealing with a
generic or custom cached plan.
This field can be used for monitoring or statistical purposes, in the
executor hooks, for example, based on the planned statement attached to
a QueryDesc. A patch is under discussion for pg_stat_statements to
provide an equivalent of the counters in pg_prepared_statements for
custom and generic plans, to provide a more global view of such data, as
this data is now restricted to the current session.
The concept introduced in this commit is useful on its own, and has been
extracted from a larger patch by the same author.
Author: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAA5RZ0uFw8Y9GCFvafhC=OA8NnMqVZyzXPfv_EePOt+iv1T-qQ@mail.gmail.com
Ensure the test waits for the apply worker to exit after disabling the
subscription. This is necessary to safely enable the retain_dead_tuples
option. Also added a similar wait in another part of the test to prevent
unintended apply worker activity that could lead to test failures
after the subscription is disabled.
Reported by Michael Paquier as per cfbot.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Discussion: https://postgr.es/m/aIGLgfRJIBwExoPj@paquier.xyz
This commit adds missing index entries for the functions pg_buffercache_numa()
and pg_buffercache_usage_counts() in the pg_buffercache documentation.
It also makes the function titles consistent by adding parentheses after
function names where they were previously missing.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/7d19af4b-7da3-4862-9f52-ff958960bd8d@oss.nttdata.com
Backpatch-through: 18
Solaris has never bothered to add "const" to the second argument
of PAM conversation procs, as all other Unixen did decades ago.
This resulted in an "incompatible pointer" compiler warning when
building --with-pam, but had no more serious effect than that,
so we never did anything about it. However, as of GCC 14 the
case is an error not warning by default.
To complicate matters, recent OpenIndiana (and maybe illumos
in general?) *does* supply the "const" by default, so we can't
just assume that platforms using our solaris template need help.
What we can do, short of building a configure-time probe,
is to make solaris.h #define _PAM_LEGACY_NONCONST, which
causes OpenIndiana's pam_appl.h to revert to the traditional
definition, and hopefully will have no effect anywhere else.
Then we can use that same symbol to control whether we include
"const" in the declaration of pam_passwd_conv_proc().
Bug: #18995
Reported-by: Andrew Watkins <awatkins1966@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18995-82058da9ab4337a7@postgresql.org
Backpatch-through: 13
lwlock.c, lwlock.h, and wait_event_names.txt each contain a list of
built-in LWLock tranches. It is easy to miss one or the other when
adding or removing tranches, and discrepancies have adverse effects
(e.g., breaking JOINs between pg_stat_activity and pg_wait_events).
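For reference, the kind of join that relies on the two lists agreeing
(both views already exist; the query is only illustrative):
SELECT a.pid, a.wait_event, w.description
FROM pg_stat_activity a
JOIN pg_wait_events w
  ON a.wait_event_type = w.type AND a.wait_event = w.name;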
This commit moves the lists of built-in tranches in lwlock.{c,h} to
lwlocklist.h and adds a cross-check to the script that generates
lwlocknames.h. If the lists do not match exactly, building will
fail.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aHpOgwuFQfcFMZ/B%40ip-10-97-1-34.eu-west-3.compute.internal
Commit 70e81861fadd split out xlogrecovery.c/h and moved some enums
related to recovery targets to xlogrecovery.h. However, it seems that
the enum RecoveryTargetAction was inadvertently left out by that commit.
This commit moves it to xlogrecovery.h for consistency.
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20240904.173013.1132986940678039655.horikyota.ntt@gmail.com
Logical replication requires reliable conflict detection to maintain data
consistency across nodes. To achieve this, we must prevent premature
removal of tuples deleted by other origins and their associated commit_ts
data by VACUUM, which could otherwise lead to incorrect conflict reporting
and resolution.
This patch introduces a mechanism to retain deleted tuples on the
subscriber during the application of concurrent transactions from remote
nodes. Retaining these tuples allows us to correctly ignore concurrent
updates to the same tuple. Without this, an UPDATE might be misinterpreted
as an INSERT during resolutions due to the absence of the original tuple.
Additionally, we ensure that origin metadata is not prematurely removed by
vacuum freeze, which is essential for detecting update_origin_differs and
delete_origin_differs conflicts.
To support this, a new replication slot named pg_conflict_detection is
created and maintained by the launcher on the subscriber. Each apply
worker tracks its own non-removable transaction ID, which the launcher
aggregates to determine the appropriate xmin for the slot, thereby
retaining necessary tuples.
Conflict information retention (deleted tuples and commit_ts) can be
enabled per subscription via the retain_conflict_info option. This is
disabled by default to avoid unnecessary overhead for configurations that
do not require conflict resolution or logging.
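For example, a minimal sketch of enabling it (connection string and
names are illustrative):
CREATE SUBSCRIPTION sub1
    CONNECTION 'host=publisher dbname=postgres'
    PUBLICATION pub1
    WITH (retain_conflict_info = true);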
During upgrades, if any subscription on the old cluster has
retain_conflict_info enabled, a conflict detection slot will be created to
protect relevant tuples from deletion when the new cluster starts.
This is a foundational work to correctly detect update_deleted conflict
which will be done in a follow-up patch.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB5716BE80DAEB0EE2A6A5D1F5949D2@OS0PR01MB5716.jpnprd01.prod.outlook.com
Compilers such as gcc and clang seem to perform this rewrite
automatically when the lookup string is known at compile-time to contain
a single character. The MSVC compiler does not seem to apply the same
optimization, and the code being adjusted here is within an #ifdef WIN32,
so it seems worth adjusting this with the assumption that strchr() will be
slightly more performant.
There are a couple more instances in contrib/fuzzystrmatch that this
commit could also have adjusted. After some discussion, we deemed those
not important enough to bother with.
Author: Dmitry Mityugov <d.mityugov@postgrespro.ru>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: David Rowley <drowleyml@gmail.com>
Discussion: https://postgr.es/m/9c1beea6c7a5e9fb6677f26620f1f257%40postgrespro.ru
Various code paths in the ECPG code did not check for memory allocation
failures, including the specific case where ecpg_strdup() treats a NULL
input value as valid. Since strdup() itself returns NULL on failure,
there was no way to distinguish a legitimately NULL input from a failed
allocation.
With these cases in mind, ecpg_strdup() is redesigned and gains a new
optional argument, giving its callers the possibility to differentiate
allocation failures from valid cases where the caller passes a NULL
value in input. Most of the ECPG code does not expect a NULL value,
with the exception of ECPGget_desc() (setlocale) and ECPGconnect()
(e.g., dbname being unspecified), which make repeated strdup calls.
The code is adapted to work with this new routine. Note the case of
ecpg_auto_prepare(), where the code order is switched so that failures
of ecpg_strdup() are handled before manipulating any cached data,
avoiding inconsistencies.
This class of failure is unlikely to be a problem in practice, so no
backpatch is done. Still, random OOM failures in ECPGconnect() could
cause the driver to connect to a different server than the one wanted by
the caller, because it could fall back to default values instead of the
parameters defined, depending on the combination of allocation failures
and successes.
Author: Evgeniy Gorbanev <gorbanyoves@basealt.ru>
Co-authored-by: Aleksander Alekseev <aleksander@tigerdata.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/a6b193c1-6994-4d9c-9059-aca4aaf41ddd@basealt.ru
Commit 112faf1378e introduced a translation marker in libpq-be-fe-helpers.h,
but this caused build failures on some platforms—such as the one reported
by buildfarm member indri—due to linker issues with dblink. This is the same
problem previously addressed in commit 213c959a294.
To fix the issue, this commit removes the translation marker from
libpq-be-fe-helpers.h, following the approach used in 213c959a294.
It also removes the associated gettext_noop() calls added in commit
112faf1378e, as they are no longer needed.
While reviewing this, a gettext_noop() call was also found in
contrib/basic_archive. Since contrib modules don't support translation,
this call has been removed as well.
Per buildfarm member indri.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/0e6299d9-608a-4ffa-aeb1-40cb8a99000b@oss.nttdata.com
The definition of \dRp+ was modified in commit 7054186c4e. This patch
updates the column list and row filter examples to align with the revised
definition.
Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Backpatch-through: 18, where it was introduced
Discussion: https://postgr.es/m/CANhcyEUvqkSO6b9zi_fs_BBPEge5acj4mf8QKmq2TX-7axa7EQ@mail.gmail.com
This index AM callback has been introduced in c1ec02be1d79 and it is
optional, currently only being used by BRIN. Optional callbacks are
documented with NULL as possible value in amapi.h and indexam.sgml, but
this callback has missed this part of the description.
Reported-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/CAHut+PvgYcPmPDi1YdHMJY5upnyGRpc0N8pk1xNB11xDSBwNog@mail.gmail.com
Backpatch-through: 17
Previously, NOTICE, WARNING, and similar messages received from remote
servers over replication, postgres_fdw, or dblink connections were printed
directly to stderr on the local server (e.g., the subscriber). As a result,
these messages lacked log prefixes (e.g., timestamp), making them harder
to trace and correlate with other log entries.
This commit addresses the issue by introducing a custom notice receiver
for replication, postgres_fdw, and dblink connections. These messages
are now logged via ereport(), ensuring they appear in the logs with proper
formatting and context, which improves clarity and aids in debugging.
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CALDaNm2xsHpWRtLm-VL_HJCsaE3+1Y_n-jDEAr3-suxVqc3xoQ@mail.gmail.com
ECPGconnect() caches established connections to the server, supporting
the case of a NULL connection name when a database name is not specified
by its caller.
A follow-up call to ECPGget_PGconn() to get an established connection
from the cached set with a non-NULL name could cause a NULL pointer
dereference if a NULL connection was listed in the cache and checked for
a match. At least two connections are necessary to reproduce the issue:
one with a NULL name and one with a non-NULL name.
Author: Aleksander Alekseev <aleksander@tigerdata.com>
Discussion: https://postgr.es/m/CAJ7c6TNvFTPUTZQuNAoqgzaSGz-iM4XR61D7vEj5PsQXwg2RyA@mail.gmail.com
Backpatch-through: 13
In commit b262ad440, we introduced an optimization that reduces an IS
[NOT] NULL qual on a NOT NULL column to constant true or constant
false, provided we can prove that the input expression of the NullTest
is not nullable by any outer joins or grouping sets. This deduction
happens quite late in the planner, during the distribution of quals to
rels in query_planner. However, this approach has some drawbacks: we
can't perform any further folding with the constant, and it turns out
to be prone to bugs.
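For reference, a sketch of the reduction introduced by b262ad440:
CREATE TABLE t (a int NOT NULL);
EXPLAIN (COSTS OFF) SELECT * FROM t WHERE a IS NOT NULL;
-- the qual is proven always true, so the plan shows no filter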
Ideally, this deduction should happen during constant folding.
However, the per-relation information about which columns are defined
as NOT NULL is not available at that point. This information is
currently collected from catalogs when building RelOptInfos for base
or "other" relations.
This patch moves the collection of NOT NULL attribute information for
relations before pull_up_sublinks, storing it in a hash table keyed by
relation OID. It then uses this information to perform the NullTest
deduction for Vars during constant folding. This also makes it
possible to leverage this information to pull up NOT IN subqueries.
Note that this patch does not get rid of restriction_is_always_true
and restriction_is_always_false. Removing them would prevent us from
reducing some IS [NOT] NULL quals that we were previously able to
reduce, because (a) the self-join elimination may introduce new IS NOT
NULL quals after constant folding, and (b) if some outer joins are
converted to inner joins, previously irreducible NullTest quals may
become reducible.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMbWs4-bFJ1At4btk5wqbezdu8PLtQ3zv-aiaY3ry9Ymm=jgFQ@mail.gmail.com
There are several pieces of catalog information that need to be
retrieved for a relation during the early stage of planning. These
include relhassubclass, which is used to clear the inh flag if the
relation has no children, as well as a column's attgenerated and
default value, which are needed to expand virtual generated columns.
More such information may be required in the future.
Currently, these pieces of catalog data are collected in multiple
places, resulting in repeated table_open/table_close calls for each
relation in the rangetable. This patch centralizes the collection of
all required early-stage catalog information into a single loop over
the rangetable, allowing each relation to be opened and closed only
once.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMbWs4-bFJ1At4btk5wqbezdu8PLtQ3zv-aiaY3ry9Ymm=jgFQ@mail.gmail.com
Currently, we expand virtual generated columns after we have pulled up
any SubLinks within the query's quals. This ensures that the virtual
generated column references within SubLinks that should be transformed
into joins are correctly expanded. This approach works well and has
posed no issues.
In an upcoming patch, we plan to centralize the collection of catalog
information needed early in the planner. This will help avoid
repeated table_open/table_close calls for relations in the rangetable.
Since this information is required during sublink pull-up, we are
moving the expansion of virtual generated columns to occur beforehand.
To achieve this, if any EXISTS SubLinks can be pulled up, their
rangetables are processed just before pulling them up.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMbWs4-bFJ1At4btk5wqbezdu8PLtQ3zv-aiaY3ry9Ymm=jgFQ@mail.gmail.com
Commit e5da0fe3c22b introduced catalog entries for not-null constraints
on domains; but because commit b0e96f311985 (the original work for
catalogued not-null constraints on tables) forgot to teach pg_dump to
process the comments for them, this one also forgot. Add that now.
We also need to teach repairDependencyLoop() about the new type of
constraints being possible for domains.
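A minimal sketch of the previously missed case (names are illustrative):
CREATE DOMAIN posint AS int CONSTRAINT posint_nn NOT NULL;
COMMENT ON CONSTRAINT posint_nn ON DOMAIN posint IS 'value required';
-- pg_dump now dumps this COMMENT along with the constraint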
Backpatch-through: 17
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF-0bqVR=j4jonS6N2Ka6hHUpFyu3_3TWKNhOW_4yFSSg@mail.gmail.com
When pg_recvlogical receives a SIGHUP signal, it closes the current
output file and reopens a new one. This is useful since it allows us to
rotate the output file by renaming the current file and sending a SIGHUP.
This behavior was previously undocumented. This commit adds
the missing documentation.
Back-patch to all supported versions.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Shinya Kato <shinya11.kato@gmail.com>
Discussion: https://postgr.es/m/0977fc4f-1523-4ecd-8a0e-391af4976367@oss.nttdata.com
Backpatch-through: 13
The only practical effect of these changes is to avoid a useless
list_copy() operation when there is a single hashclause. That's
never going to make any noticeable performance difference, but
the code is arguably clearer this way, especially if we take the
opportunity to add some comments so that readers don't have to
reverse-engineer the usage of these local variables. Also add
some braces for better/more consistent style.
Author: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAHewXNnHBOO9NEa=NBDYOrwZL4oHu2NOcTYvqyNyWEswo8f5OQ@mail.gmail.com
This commit is only for HEAD and v18, where the test has been removed.
It also incorporates improvements below to stability and coverage of the
original test, which were already backpatched to v17.
- Add one pg_logical_emit_message() call to force the creation of a record
that spans two pages.
- Make the logic wait for the checkpoint completion.
Author: Alexander Korotkov <akorotkov@postgresql.org>
Co-authored-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 18
Currently, the comments in 047_checkpoint_physical_slot show an
incomplete intention to wait for checkpoint completion before performing
an immediate database stop. However, an immediate node stop can occur both
before and after checkpoint completion. Both cases should work correctly.
But we would like the test to be more stable and deterministic. This is why
this commit makes this test explicitly wait for the checkpoint completion
log message.
Discussion: https://postgr.es/m/CAPpHfdurV-j_e0pb%3DUFENAy3tyzxfF%2ByHveNDNQk2gM82WBU5A%40mail.gmail.com
Discussion: https://postgr.es/m/aHXLep3OaX_vRTNQ%40paquier.xyz
Author: Alexander Korotkov <akorotkov@postgresql.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 17
If a crash occurs while writing a WAL record that spans multiple pages, the
recovery process marks the page with the XLP_FIRST_IS_OVERWRITE_CONTRECORD
flag. However, logical decoding currently attempts to read the full WAL
record based on its expected size before checking this flag, which can lead
to an infinite wait if the remaining data is never written (e.g., no activity
after crash).
This patch updates the logic to first read the page header and check for
the XLP_FIRST_IS_OVERWRITE_CONTRECORD flag before attempting to reconstruct
the full WAL record. If the flag is set, decoding correctly identifies
the record as incomplete and avoids waiting for WAL data that will never
arrive.
Discussion: https://postgr.es/m/CAAKRu_ZCOzQpEumLFgG_%2Biw3FTa%2BhJ4SRpxzaQBYxxM_ZAzWcA%40mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm34m36PDHzsU_GdcNXU0gLTfFY5rzh9GSQv%3Dw6B%2BQVNRQ%40mail.gmail.com
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Backpatch-through: 13
This commit improves the recovery TAP test 027_stream_regress so that
regression diffs are printed only if both the primary and the standby
are still alive after the main regression test suite finishes, relying
on d4c9195eff41 to do the job.
In particular, a crash of the primary could clutter the reported
contents with mostly useless data, as the diffs would refer to queries
that failed to run, not necessarily to the cause of the crash.
Suggested-by: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ1D6KXvjSs7YGsDeadqCxNF3UUhjRAfforzzP0k-cE=bA@mail.gmail.com
This new routine acts as a wrapper of pg_isready, that can be run on a
node to check its connection status. This will be used in a recovery
test in a follow-up commit.
Suggested-by: Andres Freund <andres@anarazel.de>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ1D6KXvjSs7YGsDeadqCxNF3UUhjRAfforzzP0k-cE=bA@mail.gmail.com
Instead of laboriously computing the exact output length, use strlen
to get an upper bound cheaply. (This is still O(N) of course, but
the constant factor is a lot less.) This will typically result in
overallocating the output datum, but that's of little concern since
it's a short-lived allocation in just about all use-cases.
A simple microbenchmark showed about 40% speedup for long input
strings.
While here, make some cosmetic cleanups and add a test case that
covers the double-backslash code path in byteain and byteaout.
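A sketch of that code path (escape format; default
standard_conforming_strings):
SET bytea_output = 'escape';
SELECT '\\'::bytea;   -- the two-character input \\ stores one backslash byte
-- byteaout then prints the stored byte back as \\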
Author: Steven Niu <niushiji@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Stepan Neretin <slpmcf@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/ca315729-140b-426e-81a6-6cd5cfe7ecc5@gmail.com
Presently, pg_dump generates commands like
SELECT pg_catalog.lo_create('5432');
ALTER LARGE OBJECT 5432 OWNER TO alice;
GRANT SELECT ON LARGE OBJECT 5432 TO bob;
for each large object. This is particularly slow at restore time,
especially when there are tens or hundreds of millions of large
objects. From reports and personal experience, such slow restores
seem to be most painful when encountered during pg_upgrade. This
commit teaches pg_dump to instead dump pg_largeobject_metadata and
the corresponding pg_shdepend rows when in binary upgrade mode,
i.e., pg_dump now generates commands like
COPY pg_catalog.pg_largeobject_metadata (oid, lomowner, lomacl) FROM stdin;
5432 16384 {alice=rw/alice,bob=r/alice}
\.
COPY pg_catalog.pg_shdepend (dbid, classid, objid, objsubid, refclassid, refobjid, deptype) FROM stdin;
5 2613 5432 0 1260 16384 o
5 2613 5432 0 1260 16385 a
\.
Testing indicates the COPY approach can be significantly faster.
To do any better, we'd probably need to find a way to copy/link
pg_largeobject_metadata's files during pg_upgrade, which would be
limited to upgrades from >= v16 (since commit 7b378237aa changed
the storage format for aclitem, which is used for
pg_largeobject_metadata.lomacl).
Note that this change only applies to binary upgrade mode (i.e.,
dumps initiated by pg_upgrade) since it inserts rows directly into
catalogs. Also, this optimization can only be used for upgrades
from >= v12 because pg_largeobject_metadata was created WITH OIDS
in older versions, which prevents pg_dump from handling
pg_largeobject_metadata.oid properly. With some extra effort, it
might be possible to support upgrades from older versions, but the
added complexity didn't seem worth it to support versions that will
have been out-of-support for nearly 3 years by the time this change
is released.
Experienced hackers may remember that prior to v12, pg_upgrade
copied/linked pg_largeobject_metadata's files (see commit
12a53c732c). Besides the aforementioned storage format issues,
this approach failed to transfer the relevant pg_shdepend rows, and
pg_dump still had to generate an lo_create() command per large
object so that creating the dependent comments and security labels
worked. We could perhaps adopt a hybrid approach for upgrades from
v16 and newer (i.e., generate lo_create() commands for each large
object, copy/link pg_largeobject_metadata's files, and COPY the
relevant pg_shdepend rows), but further testing is needed.
Reported-by: Hannu Krosing <hannuk@google.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Hannu Krosing <hannuk@google.com>
Reviewed-by: Nitin Motiani <nitinmotiani@google.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAMT0RQSS-6qLH%2BzYsOeUbAYhop3wmQTkNmQpo5--QRDUR%2BqYmQ%40mail.gmail.com
If a MERGE inside a CTE attempts an UPDATE or DELETE on a table with
BEFORE ROW triggers, and a concurrent UPDATE or DELETE happens, the
merge code would fail (crashing in the case of an UPDATE action, and
potentially executing the wrong action for a DELETE action).
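The shape of the affected statement, assuming illustrative tables
target(id, val) and source(id, val):
WITH m AS (
    MERGE INTO target t
    USING source s ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET val = s.val
    WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
    RETURNING t.id
)
SELECT count(*) FROM m;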
This is the same issue that 9321c79c86 attempted to fix, except now
for a MERGE inside a CTE. As noted in 9321c79c86, what needs to happen
is for the trigger code to exit early, returning the TM_Result and
TM_FailureData information to the merge code, if a concurrent
modification is detected, rather than attempting to do an EPQ
recheck. The merge code will then do its own rechecking, and rescan
the action list, potentially executing a different action in light of
the concurrent update. In particular, the trigger code must never call
ExecGetUpdateNewTuple() for MERGE, since that is bound to fail because
MERGE has its own per-action projection information.
Commit 9321c79c86 did this using estate->es_plannedstmt->commandType
in the trigger code to detect that a MERGE was being executed, which
is fine for a plain MERGE command, but does not work for a MERGE
inside a CTE. Fix by passing that information to the trigger code as
an additional parameter passed to ExecBRUpdateTriggers() and
ExecBRDeleteTriggers().
Back-patch as far as v17 only, since MERGE cannot appear inside a CTE
prior to that. Additionally, take care to preserve the trigger ABI in
v17 (though not in v18, which is still in beta).
Bug: #18986
Reported-by: Yaroslav Syrytsia <me@ys.lc>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18986-e7a8aac3d339fa47@postgresql.org
Backpatch-through: 17
When using a prepared statement to select data from a PostgreSQL foreign
table (postgres_fdw) with a "field = ANY($1)" expression, the operation
was not pushed down when an implicit type cast was applied and a generic
plan was used. This commit resolves the issue by supporting push-down of
ArrayCoerceExpr, which is used in this case. The support is quite
straightforward and similar to that for other nodes, such as RelabelType.
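For example (the foreign table and column names are illustrative), an
int[] parameter compared against a numeric column goes through an
ArrayCoerceExpr:
PREPARE p(int[]) AS
    SELECT * FROM remote_tbl WHERE numeric_col = ANY ($1);
SET plan_cache_mode = force_generic_plan;
EXECUTE p('{1,2,3}');   -- the condition is now shipped to the remote side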
Discussion: https://postgr.es/m/4f0cea802476d23c6e799512ffd17aff%40postgrespro.ru
Author: Alexander Pyhalov <a.pyhalov@postgrespro.ru>
Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
This is the documented behavior, and it worked that way before
v10. However, addition of the connhost[] array created cases
where conn->connhost[conn->whichhost].port is NULL. The rest
of libpq is careful to substitute DEF_PGPORT[_STR] for a null
or empty port string, but we failed to do so here, leading to
possibly returning NULL. As of v18 that causes psql's \conninfo
command to segfault. Older psql versions avoid that, but it's
pretty likely that other clients have trouble with this,
so we'd better back-patch the fix.
In stable branches, just revert to our historical behavior of
returning an empty string when there was no user-given port
specification. However, it seems substantially more useful and
indeed more correct to hand back DEF_PGPORT_STR in such cases,
so let's make v18 and master do that.
Author: Daniele Varrazzo <daniele.varrazzo@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA+mi_8YTS8WPZPO0PAb2aaGLwHuQ0DEQRF0ZMnvWss4y9FwDYQ@mail.gmail.com
Backpatch-through: 13
We have an assertion to ensure that a command tag has been assigned by
the time we're done executing, but if we happen to execute a command
with no queries, the assertion would fail. Per discussion, rather than
contort things to get a tag assigned, just remove the assertion.
Oversight in 2f9661311b83. That commit also retained a comment that
explained logic that had been adjacent to it but diffused into various
places, leaving none apt to keep part of the comment. Remove that part,
and rewrite what remains for extra clarity.
Bug: #18984
Backpatch-through: 13
Reported-by: Aleksander Alekseev <aleksander@tigerdata.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18984-0f4778a6599ac3ae@postgresql.org
This commit adds a note to the pg_overexplain page that describes
how to use it (LOAD, session_preload_libraries, or
shared_preload_libraries). The new text is mostly lifted from the
auto_explain page. We should probably consider centralizing this
information in the future.
While at it, add a missing "module" to the opening sentence.
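For reference, the per-session way to use the module:
LOAD 'pg_overexplain';
EXPLAIN (DEBUG) SELECT 1;   -- DEBUG is one of the options the module adds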
Reviewed-by: "David G. Johnston" <david.g.johnston@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/aHVWKM8l8kLlZzgv%40nathan
Backpatch-through: 18
The regression tests for TOAST compression methods are split into two
independent files: one specific to LZ4 and interactions between two
different TOAST compression methods, now called compression_lz4, and a
second one for the "core" cases where only pglz is required.
This saves 300 lines in diffs coming from the alternate output of
compression.sql, required for builds where lz4 is not available. The
new test is skipped if the build does not support LZ4 compression,
relying on an \if and the values reported in pg_settings for the GUC
default_toast_compression, "lz4" being available only under USE_LZ4.
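A sketch of that skip logic (the test's exact query may differ):
SELECT 'lz4' = ANY(enumvals) AS has_lz4
    FROM pg_settings WHERE name = 'default_toast_compression' \gset
\if :has_lz4
\else
\quit
\endif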
Another benefit of this split is that this facilitates the addition of
more compression methods for TOAST, which are under discussion.
Note the trick added for the tests of the GUC default_toast_compression,
where VERBOSITY = terse is used to avoid the HINT printing the lists of
values available in the GUC, which are environment-dependent. This
makes compression.sql independent of the availability of LZ4.
The code coverage of toast_compression.c is slightly improved, increased
from 89% to 91%, with one new case covered in lz4_compress_datum() for
incompressible data.
Author: Nikhil Kumar Veldanda <veldanda.nikhilkumar17@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aDlcU-ym9KfMj9sG@paquier.xyz
The terms used in wait_event_names.txt and lwlock.c were inconsistent
for MultiXactOffsetSLRU and MultiXactMemberSLRU, which could cause joins
between pg_wait_events and pg_stat_activity to fail. lwlock.c is
adjusted in this commit to what the historical name of the event has
always been, and what is documented.
Oversight in 53c2a97a9266. 08b9b9e043bb has fixed a similar
inconsistency some time ago.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/aHdxN0D0hKXzHFQG@ip-10-97-1-34.eu-west-3.compute.internal
Backpatch-through: 17
The paragraph for introducing INSERT and COPY discussed how a file
could be used for bulk loading with COPY, without actually showing
what the file would look like. This adds a programlisting for the
file contents.
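Roughly, for the tutorial's weather table, the file contents look like
this (tab-separated, one row per line; the values shown here are
illustrative):
San Francisco	46	50	0.25	1994-11-27
San Francisco	43	57	0	1994-11-29
Hayward	37	54	\N	1994-11-29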
Backpatch to all supported branches since this example has lacked
the file contents since PostgreSQL 7.2.
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/158017814191.19852.15019251381150731439@wrigleys.postgresql.org
Backpatch-through: 13
Avoid dependence on setlocale().
strcoll(), etc., are not called directly; all collation-sensitive
calls should go through pg_locale.c and use the appropriate
provider. By setting LC_COLLATE to C, we avoid accidentally depending
on libc behavior when using a different provider.
No behavior change in the backend, but it's possible that some
extensions will be affected. Such extensions should be updated to use
the pg_locale_t APIs.
Discussion: https://postgr.es/m/9875f7f9-50f1-4b5d-86fc-ee8b03e8c162@eisentraut.org
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
We skip dumping constraints together with domains if they are invalid
('separate') so that they appear after data -- but their comments were
dumped together with the domain definition, which in effect leads to the
comment being restored at a point where the constraint does not yet
exist. Delay them in the same way.
Oversight in 7eca575d1c28; backpatch all the way back.
Author: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF_C2pe6J_+nPr6C5jf5rQnbYP8XOKr4HM8yHZtp2aQqQ@mail.gmail.com
_bt_first need only store one ScanKeyData struct on the stack for the
purposes of building an IS NOT NULL key based on an implied NOT NULL
constraint. We don't need INDEX_MAX_KEYS-many ScanKeyData structs.
This saves us a little over 2KB in stack space. It's possible that this
has some performance benefit. It also seems simpler and more direct.
It isn't possible for more than a single index attribute to need its own
implied IS NOT NULL key: the first such attribute/IS NOT NULL key always
makes _bt_first stop adding additional boundary keys to startKeys[].
Using INDEX_MAX_KEYS-many ScanKeyData entries was (at best) misleading.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Mircea Cadariu <cadariu.mircea@gmail.com>
Discussion: https://postgr.es/m/CAH2-Wzm=1kJMSZhhTLoM5BPbwQNWxUj-ynOEh=89ptDZAVgauw@mail.gmail.com
Previously, pg_dumpall would still dump global objects such as roles
and tablespaces even when --statistics-only or --no-schema was specified.
Since these global objects are treated as schema-level data, they should
be skipped in these cases.
This commit fixes the issue by ensuring that global objects are not
dumped when either --statistics-only or --no-schema is used.
Author: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/08129593-6f3c-4fb9-94b7-5aa2eefb99b0@oss.nttdata.com
Backpatch-through: 18
This adjusts the wording to match the changes in commits
5987553fde, a233a603ba, and pgweb commit 2d764dbc08.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/aHVo791guQR6uqwT%40nathan
Backpatch-through: 13
This code used a NO_LZ4_SUPPORT() macro to issue an error in the code
paths where LZ4 [de]compression is attempted but the build does not
support it. This commit refactors the code to use a more flexible error
message so as it can be used for other compression methods, where the
method is given in input of macro.
Extracted from a larger patch by the same author.
Author: Nikhil Kumar Veldanda <veldanda.nikhilkumar17@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Discussion: https://postgr.es/m/CAFAfj_HX84EK4hyRYw50AOHOcdVi-+FFwAAPo7JHx4aShCvunQ@mail.gmail.com
The pgoutput plugin initializes optional parameters like "binary" with
default values at the start of processing. However, the "origin"
parameter was previously missed and left without explicit initialization.
Although the PGOutputData struct, which holds these settings,
is zero-initialized at allocation (resulting in the publish_no_origin
field for the "origin" parameter being false by default), this default
was not set explicitly, unlike those of the other parameters.
This commit adds explicit initialization of the "origin" parameter to
ensure consistency and clarity in how defaults are handled.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Discussion: https://postgr.es/m/d2790f10-238d-4cb5-a743-d9d2a9dd900f@oss.nttdata.com
The pgoutput plugin options are described in the logical streaming
replication protocol documentation, but their default values were
previously not mentioned. This made it less convenient for users,
for example, when specifying those options to use pg_recvlogical with
the pgoutput plugin.
This commit adds the explanations of the default values for pgoutput
options to improve clarity and usability.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/d2790f10-238d-4cb5-a743-d9d2a9dd900f@oss.nttdata.com
Previously, the documentation described the streaming option as a boolean,
which is outdated since it's no longer a boolean as of protocol version 4.
This could confuse users.
This commit updates the description to remove the "boolean" reference and
clearly list the valid values for the streaming option.
Back-patch to v16, where the streaming option changed to a non-boolean.
Author: Euler Taveira <euler@eulerto.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/8d21fb98-5c25-4dee-8387-e5a62b01ea7d@app.fastmail.com
Backpatch-through: 16
The last_vacuum and vacuum_count fields in pg_stat_all_tables already
state that they do not include VACUUM FULL. However, total_vacuum_time,
which also excludes VACUUM FULL, did not mention this. This could
mislead users into thinking VACUUM FULL time is included.
To address this, this commit updates the documentation for
pg_stat_all_tables to explicitly state that total_vacuum_time does not
count VACUUM FULL.
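For reference, the columns in question (a sample query; the table name
is illustrative):
SELECT relname, vacuum_count, total_vacuum_time, n_ins_since_vacuum
    FROM pg_stat_all_tables
    WHERE relname = 'mytable';
-- none of these counters include VACUUM FULL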
Back-patched to v18, where total_vacuum_time was introduced.
Additionally, this commit clarifies that n_ins_since_vacuum also
excludes VACUUM FULL. Although n_ins_since_vacuum was added in v13,
we are not back-patching this change to stable branches, as it is
a documentation improvement, not a bug fix.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/2ac375d1-591b-4f1b-a2af-f24335567866@oss.nttdata.com
Backpatch-through: 18
The grammar was a little shaky and confusing here, so word-smith it
a bit. Also, adjust the comments in pg_ident.conf.sample to use the
same terminology as the SGML docs, in particular "DATABASE-USERNAME"
not "PG-USERNAME".
Back-patch appropriate subsets. I did not risk changing
pg_ident.conf.sample in released branches, but it still seems OK
to change it in v18.
Reported-by: Alexey Shishkin <alexey.shishkin@enterprisedb.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/175206279327.3157504.12519088928605422253@wrigleys.postgresql.org
Backpatch-through: 13
It's impossible to reach this case with either ra or rb being
WJB_DONE, because our earlier checks that the structure and
length of the inputs match should guarantee that we reach their
ends simultaneously. However, the comment completely fails to
explain this, and the Asserts don't cover it either. The comment
is pretty obscure anyway, so rewrite it, and extend the Asserts
to reject WJB_DONE.
This is only cosmetic, so no need for back-patch.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/0c623e8a204187b87b4736792398eaf1@postgrespro.ru
Because not every path through JsonbIteratorNext() sets val->type,
some compilers complain that compareJsonbContainers() is comparing
possibly-uninitialized values. The paths that don't set it return
WJB_DONE, WJB_END_ARRAY, or WJB_END_OBJECT, so it's clear by
manual inspection that the "(ra == rb)" code path is safe, and
indeed we aren't seeing warnings about that. But the (ra != rb)
case is much less obviously safe. In Assert-enabled builds it
seems that the asserts rejecting WJB_END_ARRAY and WJB_END_OBJECT
persuade gcc 15.x not to warn, which makes little sense because
it's impossible to believe that the compiler can prove of its
own accord that ra/rb aren't WJB_DONE here. (In fact they never
will be, so the code isn't wrong, but why is there no warning?)
Without Asserts, the appearance of warnings is quite unsurprising.
We discussed fixing this by converting those two Asserts into
pg_assume, but that seems not very satisfactory when it's so unclear
why the compiler is or isn't warning: the warning could easily
reappear with some other compiler version. Let's fix it in a less
magical, more future-proof way by changing JsonbIteratorNext()
so that it always does set val->type. The cost of that should be
pretty negligible, and it makes the function's API spec less squishy.
Reported-by: Erik Rijkers <er@xs4all.nl>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/988bf1bc-3f1f-99f3-bf98-222f1cd9dc5e@xs4all.nl
Discussion: https://postgr.es/m/0c623e8a204187b87b4736792398eaf1@postgrespro.ru
Backpatch-through: 13
Minor wordsmithing of the func.sgml paragraph describing
statement_timestamp() and allied functions: don't switch between
"statement" and "command" when those are being used to mean about
the same thing.
Also, add some text to protocol.sgml describing the perhaps-surprising
behavior these functions have in a multi-statement Query message.
Reported-by: P M <petermittere@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/175223006802.3157505.14764328206246105568@wrigleys.postgresql.org
Backpatch-through: 13
Previously, when pressing Tab after GRANT or REVOKE ... ON LARGE OBJECT
or ON FOREIGN SERVER, TO or FROM was incorrectly suggested by psql's
tab-completion. This was not appropriate, as those clauses are not valid
at that point.
This commit fixes the issue by preventing TO and FROM from being offered
immediately after those specific GRANT/REVOKE statements.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/20250408122857.b2b06dde4e6a08290af02336@sraoss.co.jp
Previously, amcheck could produce a misleading error message when
a partitioned index was passed to functions like bt_index_check().
For example, bt_index_check() with a partitioned btree index produced:
ERROR: expected "btree" index as targets for verification
DETAIL: Relation ... is a btree index.
Reporting "expected btree index as targets" even when the specified
index was a btree was confusing. In this case, the function should fail
since the partitioned index specified is not valid target. This commit
improves the error reporting to better reflect this actual issue. Now,
bt_index_check() with a partitioned index, the error message is:
ERROR: expected index as targets for verification
DETAIL: This operation is not supported for partitioned indexes.
This commit also applies the following minor changes:
- Simplifies index_checkable() by using get_am_name() to retrieve
the access method name.
- Changes index_checkable() from extern to static, as it is only used
in verify_common.c.
- Updates the error code for invalid indexes to
ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE,
aligning with usage in similar modules like pgstattuple.
Author: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/8829854bbfc8635ddecd0846bb72dfda@oss.nttdata.com
This new psql variable can be used to check which service file has been
used for a connection. Like other variables, this can be set in a
PROMPT or reported by an \echo, like these commands:
\echo :SERVICEFILE
\set PROMPT1 '=(%:SERVICEFILE:)%# '
This relies on commits 092f3c63efc6 and fef6da9e9c87 to retrieve this
information from the connection's PQconninfoOption.
Author: Ryo Kanbayashi <kanbayashi.dev@gmail.com>
Discussion: https://postgr.es/m/CAKkG4_nCjx3a_F3gyXHSPWxD8Sd8URaM89wey7fG_9g7KBkOCQ@mail.gmail.com
If the system-name field of a pg_ident.conf line is a regex
containing capturing parentheses, you can write \1 in the
user-name field to represent the captured part of the system
name. But what happens if you write \1 more than once?
The only reasonable expectation IMO is that each \1 gets
replaced, but presently our code replaces only the first.
Fix that.
Also, improve the tests for this feature to exercise cases
where a non-empty string needs to be substituted for \1.
The previous testing didn't inspire much faith that it
was verifying correct operation of the substitution code.
Given the lack of field complaints about this, I don't
feel a need to back-patch.
Reported-by: David G. Johnston <david.g.johnston@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAKFQuwZu6kZ8ZPvJ3pWXig+6UX4nTVK-hdL_ZS3fSdps=RJQQQ@mail.gmail.com
This commit adds the possibility to specify a service file in a
connection string, using a new option called "servicefile". The service
file to use is determined in this order of priority:
- The servicefile connection option.
- Environment variable PGSERVICEFILE.
- Default path, depending on the HOME environment.
Note that in the last default case, we need to fill in "servicefile" for
the connection's PQconninfoOption to let clients know which service file
has been used for the connection. Some TAP tests are added, with a few
tweaks required for Windows when using URIs or connection option values,
for the location paths.
Author: Torsten Förtsch <tfoertsch123@gmail.com>
Co-authored-by: Ryo Kanbayashi <kanbayashi.dev@gmail.com>
Discussion: https://postgr.es/m/CAKkG4_nCjx3a_F3gyXHSPWxD8Sd8URaM89wey7fG_9g7KBkOCQ@mail.gmail.com
A few code paths set this variable, but its value is never used.
Oversight in commit 2fc7af5e96.
Reviewed-by: Aleksander Alekseev <aleksander@tigerdata.com>
Discussion: https://postgr.es/m/aHFyE1bs9YR93dQ1%40nathan
The values of the "result" variables in these functions are
always integers; using a float8 variable accomplishes nothing
except to incur useless conversions to and from float. While
that wastes a few nanoseconds, these functions aren't all that
time-critical. But it seems worth fixing to remove possible
reader confusion.
Also, in the case of date2isoyear(), "result" is a very poorly
chosen variable name because it is *not* the function's result.
Rename it to "week", and do the same in date2isoweek() for
consistency.
Since this is mostly cosmetic, there seems little need
for back-patch.
Author: Sergey Fukanchik <s.fukanchik@postgrespro.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6323a-68726500-1-7def9d00@137821581
method_worker.c installed SignalHandlerForConfigReload, but it failed to
actually process reload requests. That hasn't yet produced any concrete
problem reports in terms of GUC changes it should have cared about in
v18, but it was inconsistent.
It did cause problems for a couple of patches in development that need
IO workers to react to ALTER SYSTEM + pg_reload_conf(). Fix extracted
from one of those patches.
Back-patch to 18.
Reported-by: Dmitry Dolgov <9erthalion6@gmail.com>
Discussion: https://postgr.es/m/sh5uqe4a4aqo5zkkpfy5fobe2rg2zzouctdjz7kou4t74c66ql%40yzpkxb7pgoxf
In an ancient ancestor of this code, the postmaster assigned IDs to IO
workers. Now it tracks them in an unordered array and doesn't know
their IDs, so it might be confusing to readers that it still referred to
their indexes as IDs.
No change in behavior, just variable name and error message cleanup.
Back-patch to 18.
Discussion: https://postgr.es/m/CA%2BhUKG%2BwbaZZ9Nwc_bTopm4f-7vDmCwLk80uKDHj9mq%2BUp0E%2Bg%40mail.gmail.com
Adopt PgAioXXX convention for pgaio module type names. Rename a
function that didn't use a pgaio_worker_ submodule prefix. Rename the
internal submit function's arguments to match the indirectly relevant
function pointer declaration and nearby examples. Rename the array of
handle IDs in PgAioSubmissionQueue to sqes, a term of art seen in the
systems it emulates, also clarifying that they're not IO handle
pointers as the old name might imply.
No change in behavior, just type, variable and function name cleanup.
Back-patch to 18.
Discussion: https://postgr.es/m/CA%2BhUKG%2BwbaZZ9Nwc_bTopm4f-7vDmCwLk80uKDHj9mq%2BUp0E%2Bg%40mail.gmail.com
getid() and putid(), which parse and deparse role names within ACL
input/output, applied isalnum() to see if a character within a role
name requires quoting. They did this even for non-ASCII characters,
which is problematic because the results would depend on encoding,
locale, and perhaps even platform. So it's possible that putid()
could elect not to quote some string that, later in some other
environment, getid() will decide is not a valid identifier, causing
dump/reload or similar failures.
To fix this in a way that won't risk interoperability problems
with unpatched versions, make getid() treat any non-ASCII as a
legitimate identifier character (hence not requiring quotes),
while making putid() treat any non-ASCII as requiring quoting.
We could remove the resulting excess quoting once we feel that
no unpatched servers remain in the wild, but that'll be years.
A lesser problem is that getid() did the wrong thing with an input
consisting of just two double quotes (""). That has to represent an
empty string, but getid() read it as a single double quote instead.
The case cannot arise in the normal course of events, since we don't
allow empty-string role names. But let's fix it while we're here.
Although we've not heard field reports of problems with non-ASCII
role names, there's clearly a hazard there, so back-patch to all
supported versions.
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3792884.1751492172@sss.pgh.pa.us
Backpatch-through: 13
Commit b0635bfda split off the CPPFLAGS/LDFLAGS/LDLIBS for libcurl into
their own separate Makefile variables, but I neglected to move the
existing AC_CHECKs for Curl into a place where they would make use of
those variables. They instead tested the system libcurl, which 1) is
unhelpful if a different Curl is being used for the build and 2) will
fail the build entirely if no system libcurl exists. Correct the order
of operations here.
Reported-by: Ivan Kush <ivan.kush@tantorlabs.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Ivan Kush <ivan.kush@tantorlabs.com>
Discussion: https://postgr.es/m/8a611028-51a1-408c-b592-832e2e6e1fc9%40tantorlabs.com
Backpatch-through: 18
This option, which is disabled by default, can be used to request
the checkpoint also flush dirty buffers of unlogged relations. As
with the MODE option, the server may consolidate the options for
concurrently requested checkpoints. For example, if one session
uses (FLUSH_UNLOGGED FALSE) and another uses (FLUSH_UNLOGGED TRUE),
the server may perform one checkpoint with FLUSH_UNLOGGED enabled.
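For example:
CHECKPOINT (FLUSH_UNLOGGED);         -- also flush dirty unlogged buffers
CHECKPOINT (FLUSH_UNLOGGED FALSE);   -- the default behavior, spelled out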
Author: Christoph Berg <myon@debian.org>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Discussion: https://postgr.es/m/aDnaKTEf-0dLiEfz%40msg.df7cb.de
This option may be set to FAST (the default) to request the
checkpoint be completed as fast as possible, or SPREAD to request
the checkpoint be spread over a longer interval (based on the
checkpoint-related configuration parameters). Note that the server
may consolidate the options for concurrently requested checkpoints.
For example, if one session requests a "fast" checkpoint and
another requests a "spread" checkpoint, the server may perform one
"fast" checkpoint.
Author: Christoph Berg <myon@debian.org>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Discussion: https://postgr.es/m/aDnaKTEf-0dLiEfz%40msg.df7cb.de
This commit adds the boilerplate code for supporting a list of
options in CHECKPOINT commands. No actual options are supported
yet, but follow-up commits will add support for MODE and
FLUSH_UNLOGGED. While at it, this commit refactors the code for
executing CHECKPOINT commands to its own function since it's about
to become significantly larger.
Author: Christoph Berg <myon@debian.org>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://postgr.es/m/aDnaKTEf-0dLiEfz%40msg.df7cb.de
The new name more accurately reflects the effects of this flag on a
requested checkpoint. Checkpoint-related log messages (i.e., those
controlled by the log_checkpoints configuration parameter) will now
say "fast" instead of "immediate", too. Likewise, references to
"immediate" checkpoints in the documentation have been updated to
say "fast". This is preparatory work for a follow-up commit that
will add a MODE option to the CHECKPOINT command.
Author: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/aDnaKTEf-0dLiEfz%40msg.df7cb.de
The new name more accurately reflects the effects of this flag on a
requested checkpoint. Checkpoint-related log messages (i.e., those
controlled by the log_checkpoints configuration parameter) will now
say "flush-unlogged" instead of "flush-all", too. This is
preparatory work for a follow-up commit that will add a
FLUSH_UNLOGGED option to the CHECKPOINT command.
Author: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/aDnaKTEf-0dLiEfz%40msg.df7cb.de
We already forced LC_MESSAGES to C in order to get consistent
message output, but that isn't enough to stabilize messages
that include %f or similar formatting.
I'm a bit surprised that this hasn't come up before. Perhaps
we ought to back-patch this change, but I'll refrain for now.
Reported-by: Bernd Helmle <mailings@oopsware.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6f024eaa7885eddf5e0eb4ba1d095fbc7146519b.camel@oopsware.de
Previously, the check_hook functions for max_slot_wal_keep_size and
idle_replication_slot_timeout would incorrectly raise an ERROR for values
set in postgresql.conf during upgrade, even though those values were not
actively used in the upgrade process.
To prevent logical slot invalidation during upgrade, we used to set
special values for these GUCs. Now, instead of relying on those values, we
directly prevent WAL removal and logical slot invalidation caused by
max_slot_wal_keep_size and idle_replication_slot_timeout.
Note: PostgreSQL 17 does not include the idle_replication_slot_timeout
GUC, so related changes were not backported.
Bug: #18979
Reported-by: jorsol <jorsol@gmail.com>
Author: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/219561.1751826409@sss.pgh.pa.us
Discussion: https://postgr.es/m/18979-a1b7fdbb7cd181c6@postgresql.org
In the description of StartupMessage, the protocol version was left at
3.0. Instead of just updating it, this commit removes the hard-coded
protocol version and shows the numbers only as an example. This way,
that part of the documentation does not need to be updated whenever the
version changes in the future.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Tatsuo Ishii <ishii@postgresql.org>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/20250626.155608.568829483879866256.ishii%40postgresql.org
This commit updates the documentation to clarify that "idle" in
idle_replication_slot_timeout means the replication slot is inactive,
that is, not currently used by any replication connection.
Without this clarification, "idle" could be misinterpreted to mean
that the slot is not advancing or that no data is being streamed,
even if a connection exists.
Back-patch to v18 where idle_replication_slot_timeout was added.
Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Gunnar Morling <gunnar.morling@googlemail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CADGJaX_0+FTguWpNSpgVWYQP_7MhoO0D8=cp4XozSQgaZ40Odw@mail.gmail.com
Backpatch-through: 18
Previously, the idle_replication_slot_timeout parameter used minutes
as its unit, based on the assumption that values would typically exceed
one minute in production environments. However, this caused unexpected
behavior: specifying a value below 30 seconds would round down to 0,
effectively disabling the timeout. This could be surprising to users.
To allow finer-grained control and avoid such confusion, this commit changes
the unit of idle_replication_slot_timeout to seconds. Larger values can
still be specified easily using standard time suffixes, for example,
'24h' for 24 hours.
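For example:
ALTER SYSTEM SET idle_replication_slot_timeout = '30s';  -- no longer rounds to 0
SELECT pg_reload_conf();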
Back-patch to v18 where idle_replication_slot_timeout was added.
Reported-by: Gunnar Morling <gunnar.morling@googlemail.com>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CADGJaX_0+FTguWpNSpgVWYQP_7MhoO0D8=cp4XozSQgaZ40Odw@mail.gmail.com
Backpatch-through: 18
When sslkeylogfile has been set but the file fails to open in an
otherwise successful connection, the log entry added to the conn
object is never printed. Instead print the error on stderr for
increased visibility. This is a debugging tool so using stderr
for logging is appropriate. Also while there, remove the umask
call in the callback as it's not useful.
Issues noted by Peter Eisentraut in post-commit review. Backpatch
down to 18, where support for sslkeylogfile was added.
Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/70450bee-cfaa-48ce-8980-fc7efcfebb03@eisentraut.org
Backpatch-through: 18
Commit a45c78e328 moved large object metadata from SECTION_PRE_DATA
to SECTION_DATA but neglected to move PRIO_LARGE_OBJECT in
dbObjectTypePriorities accordingly. While this hasn't produced any
known live bugs, it causes problems for a proposed patch that
optimizes upgrades with many large objects. Fixing the priority
might also make the topological sort step marginally faster by
reducing the number of ordering violations that have to be fixed.
Reviewed-by: Nitin Motiani <nitinmotiani@google.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aBkQLSkx1zUJ-LwJ%40nathan
Discussion: https://postgr.es/m/aG_5DBCjdDX6KAoD%40nathan
Backpatch-through: 17
During the development cycle of v18, btree_gist was bumped once to
1.8 for the addition of translate_cmptype support functions (originally
7406ab623fee, renamed in 32edf732e8dc), and then to 1.9 for the addition
of sortsupport functions (e4309f73f698).
There is no need for two version bumps in a module for a single major
release of PostgreSQL. This commit unifies both upgrades to a single
SQL script, downgrading btree_gist to 1.8.
Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/13c61807-f702-4afe-9a8d-795e2fd40923@illuminatedcomputing.com
Backpatch-through: 18
This function can be used to retrieve the information about all the
injection points attached to a cluster, providing coverage for
InjectionPointList() introduced in 7b2eb72b1b8c.
The original proposal revolved around a system function, but that would
not be backpatchable to stable branches.  It was also a bit weird to
have a system function that fails depending on whether the build allows
injection points or not.
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Discussion: https://postgr.es/m/Z_xYkA21KyLEHvWR@paquier.xyz
Since b0635bfda, libpq uses dlopen() and related functions. On some
platforms these are not supplied by libc, but by a separate library
libdl, in which case we need to make sure that that dependency is
known to the linker. Meson seems to take care of that automatically,
but the Makefile didn't cater for it.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1328170.1752082586@sss.pgh.pa.us
Backpatch-through: 18
Increase the size of the "direct" histogram to 10K elements,
so that we can precisely track loop times up to 10 microseconds.
(Going further than that seems pretty uninteresting, even for
very old and slow machines.)
Relabel "Per loop time" as "Average loop time" for clarity.
Pre-zero the histogram arrays to make sure that they are loaded
into processor cache and any copy-on-write overhead has happened
before we enter the timing loop. Also use unlikely() to keep
the compiler from thinking that the clock-went-backwards case
is part of the hot loop. Neither of these hacks made a lot of
difference on my own machine, but they seem like they might help
on some platforms.
Discussion: https://postgr.es/m/be0339cc-1ae1-4892-9445-8e6d8995a44d@eisentraut.org
This commit adds a new system view that provides information about
entries in the dynamic shared memory (DSM) registry. Specifically,
it returns the name, type, and size of each entry. Note that since
we cannot discover the size of dynamic shared memory areas (DSAs)
and hash tables backed by DSAs (dshashes) without first attaching
to them, the size column is left as NULL for those.
Bumps catversion.
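As an illustrative query, assuming the view is exposed under the name
pg_dsm_registry_allocations:
SELECT name, type, size FROM pg_dsm_registry_allocations;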
Author: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Sungwoo Chang <swchangdev@gmail.com>
Discussion: https://postgr.es/m/4D445D3E-81C5-4135-95BB-D414204A0AB4%40gmail.com
Commit c273d9d8ce4 reworked tab-completion of COPY and \copy in psql
and added support for completing options within WITH clauses. However,
the same COPY options were suggested for both COPY TO and COPY FROM
commands, even though some options are only valid for one or the
other.
This commit separates the COPY options for COPY FROM and COPY TO
commands to provide more accurate auto-completion suggestions.
Back-patch to v14 where tab-completion for COPY and \copy options
within WITH clauses was first supported.
Author: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/079e7a2c801f252ae8d522b772790ed7@oss.nttdata.com
Backpatch-through: 14
This commit enhances psql's tab completion to support TO/FROM
after "GRANT/REVOKE ... ON LARGE OBJECT ...". Additionally,
since "ALTER DEFAULT PRIVILEGES" now supports large objects,
tab completion is also updated for "GRANT/REVOKE ... ON LARGE OBJECTS"
with TO/FROM.
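For example, TO/FROM is now completed in statements like these
(hypothetical large object OID and role name):
GRANT UPDATE ON LARGE OBJECT 42 TO alice;
ALTER DEFAULT PRIVILEGES GRANT SELECT ON LARGE OBJECTS TO alice;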
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/ade0ab29-777f-47f6-9d0d-1af67728a86e@oss.nttdata.com
This test corresponds to the case of a "service" defined in a service
file, that libpq is not able to support in parseServiceFile().
This has come up during the review of a patch to add more features in
this area, useful on its own. Piece extracted from a larger patch by
the same author.
Author: Ryo Kanbayashi <kanbayashi.dev@gmail.com>
Discussion: https://postgr.es/m/Zz2AE7NKKLIZTtEh@paquier.xyz
Clarified that the failover steps apply to a specific PostgreSQL subscriber
and added guidance for verifying replication slot synchronization during
planned failover. Additionally, corrected the standby query to avoid false
positives by checking invalidation_reason IS NULL instead of conflicting.
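The corrected standby query is along these lines (a sketch based on the
description above):
SELECT slot_name FROM pg_replication_slots
  WHERE synced AND NOT temporary AND invalidation_reason IS NULL;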
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Author: Shveta Malik <shveta.malik@gmail.com>
Backpatch-through: 17, where it was introduced
Discussion: https://www.postgresql.org/message-id/CAExHW5uiZ-fF159=jwBwPMbjZeZDtmcTbN+hd4mrURLCg2uzJg@mail.gmail.com
This routine has been introduced as a shortcut to be able to retrieve a
service name from an active connection, for psql. Per discussion, and
as it is only used by psql, let's remove it to not clutter the libpq API
more than necessary.
The logic in psql is instead replaced by lookups of PQconninfoOption for
the active connection, refreshed each time psql syncs its variables; the
prompt shortcut relies on the synced variable.
Reported-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20250706161319.c1.nmisch@google.com
Backpatch-through: 18
What we want in these places is "xmlChar *volatile ptr",
not "volatile xmlChar *ptr". The former means that the
pointer variable itself needs to be treated as volatile,
while the latter says that what it points to is volatile.
Since the point here is to ensure that the pointer variables
don't go crazy after a longjmp, it's the former semantics
that we need. The misplacement of "volatile" also led
to needing to cast away volatile in some places.
Also fix a number of places where variables that are assigned to
within a PG_TRY and then used after it were not initialized or
not marked as volatile. (A few buildfarm members were issuing
"may be used uninitialized" warnings about some of these variables,
which is what drew my attention to this area.) In most cases
these variables were being set as the last step within the PG_TRY
block, which might mean that we could get away without the "volatile"
marking. But doing that seems unsafe and is definitely not per our
coding conventions.
These problems seem to have come in with 732061150, so no need
for back-patch.
xmltotext_with_options() did not consider the possibility that
pg_xml_init() could fail --- most likely due to OOM. If that
happened, the already-parsed xmlDoc structure would be leaked.
Oversight in commit 483bdb2af.
Bug: #18981
Author: Dmitry Kovalenko <d.kovalenko@postgrespro.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18981-9bc3c80f107ae925@postgresql.org
Backpatch-through: 16
Most of our platforms have better-than-microsecond timing resolution,
so the original definition of this program is getting less and less
useful. Make it report nanoseconds not microseconds. Also, add a
second output table that reports the exact observed timing durations,
up to a limit of 1024 ns; and be sure to report the largest observed
duration.
The documentation for this program included a lot of system-specific
details that now seem largely obsolete. Move all that text to the
PG wiki, where perhaps it will be easier to maintain and update.
Also, improve the TAP test so that it actually runs a short standard
run, allowing most of the code to be exercised; its coverage before
was abysmal.
Author: Hannu Krosing <hannuk@google.com>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/be0339cc-1ae1-4892-9445-8e6d8995a44d@eisentraut.org
Per buildfarm member culicidae, the query checking for stats reported by
the WAL summarizer related to WAL reads is proving to be unstable.
Instead of a one-time query, this commit replaces the logic with a
polling query checking for the WAL read stats, making the test more
reliable on machines that could be slow with the stats reports.
This test has been introduced in f4694e0f35b2, so backpatch down to v18.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/f35ba3db-fca7-4693-bc35-6db64488e4b1@gmail.com
Backpatch-through: 18
By default io_uring creates a shared memory mapping for each io_uring
instance, leading to a large number of memory mappings. Unfortunately a large
number of memory mappings slows things down; backend exit is particularly
affected. To address that, newer kernels (6.5) support using user-provided
memory for the rings. By putting the relevant memory into shared memory we
don't need any additional mappings.
On a system with a new enough kernel and liburing, there is no discernible
overhead when doing a pgbench -S -C anymore.
Reported-by: MARK CALLAGHAN <mdcallag@gmail.com>
Reviewed-by: "Burd, Greg" <greg@burd.me>
Reviewed-by: Jim Nasby <jnasby@upgrade.com>
Discussion: https://postgr.es/m/CAFbpF8OA44_UG+RYJcWH9WjF7E3GA6gka3gvH6nsrSnEe9H0NA@mail.gmail.com
Backpatch-through: 18
For an ordered Append or MergeAppend, we need to inject an explicit
sort into any subpath that is not already well enough ordered.
Currently, only explicit full sorts are considered; incremental sorts
are not yet taken into account.
In this patch, for subpaths of an ordered Append or MergeAppend, we
choose to use explicit incremental sort if it is enabled and there are
presorted keys.
The rationale is based on the assumption that incremental sort is
always faster than full sort when there are presorted keys, a premise
that has been applied in various parts of the code. In addition, the
current cost model tends to favor incremental sort as being cheaper
than full sort in the presence of presorted keys, making it reasonable
not to consider full sort in such cases.
No backpatch as this could result in plan changes.
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/CAMbWs4_V7a2enTR+T3pOY_YZ-FU8ZsFYym2swOz4jNMqmSgyuw@mail.gmail.com
In b0635bfda, I added an early header check to the Meson OAuth support,
which was intended to duplicate the later checks for
HAVE_SYS_[EVENT|EPOLL]_H. However, I implemented the new test via
check_header() -- which tries to compile -- rather than has_header(),
which just looks for the file's existence.
The distinction matters on OpenBSD, where <sys/event.h> can't be
compiled without including prerequisite headers, so -Dlibcurl=enabled
failed on that platform. Switch to has_header() to fix this.
Note that reviewers expressed concern about the difference between our
Autoconf feature tests (which compile headers) and our Meson feature
tests (which do not). I'm not opposed to aligning the two, but I want to
avoid making bigger changes as part of this fix.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/flat/CAOYmi+kdR218ke2zu74oTJvzYJcqV1MN5=mGAPqZQuc79HMSVA@mail.gmail.com
Backpatch-through: 18
Commit 2633dae2e487 added some zero padding to various LSNs output
routines so that the low word is always 8 hex digits long, for easy
human consumption. This included the pg_lsn datatype, which breaks the
pg_upgrade test when it compares the pg_dump output of an older version.
Silence this problem by setting the pg_lsn columns to NULL before the
upgrade.
Discussion: https://postgr.es/m/202507071504.xm2r26u7lmzr@alvherre.pgsql
pl/pgsql's notion of an "expression" is very broad, encompassing
any SQL SELECT query that returns a single column and no more than
one row. So there are cases, for example evaluation of an aggregate
function, where the query involves significant work and it'd be useful
to run it with parallel workers. This used to be possible, but
commits 3eea7a0c9 et al unintentionally disabled it.
The simplest fix is to make exec_eval_expr() pass maxtuples = 0
rather than 2 to exec_run_select(). This avoids the new rule that
we will never use parallelism when a nonzero "count" limit is passed
to ExecutorRun(). (Note that the pre-3eea7a0c9 behavior was indeed
unsafe, so reverting that rule is not in the cards.) The reason
for passing 2 before was that exec_eval_expr() will throw an error
if it gets more than one returned row, so we figured that as soon
as we have two rows we know that will happen and we might as well
stop running the query. That choice was cost-free when it was made;
but disabling parallelism is far from cost-free, so now passing 2
amounts to optimizing a failure case at the expense of useful cases.
An expression query that can return more than one row is certainly
broken. People might now need to wait a bit longer to discover such
breakage; but hopefully few will use enormously expensive cases as
their first test of new pl/pgsql logic.
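For example, the aggregate in an assignment like this one (hypothetical
big_table) is an "expression" whose underlying query can again use
parallel workers:
DO $$
DECLARE s numeric;
BEGIN
  s := (SELECT sum(x) FROM big_table);
END $$;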
Author: Dipesh Dhameliya <dipeshdhameliya125@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CABgZEgdfbnq9t6xXJnmXbChNTcWFjeM_6nuig41tm327gYi2ig@mail.gmail.com
Backpatch-through: 13
Functions to bootstrap and zero pages in various SLRU callers were
fairly duplicative. We can slash almost two hundred lines with a couple
of simple helpers:
- SimpleLruZeroAndWritePage: Does the equivalent of SimpleLruZeroPage
followed by flushing the page to disk
- XLogSimpleInsertInt64: Does a XLogBeginInsert followed by XLogInsert
of a trivial record whose data is just an int64.
Author: Evgeny Voropaev <evgeny.voropaev@tantorlabs.com>
Reviewed by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed by: Andrey Borodin <x4mmm@yandex-team.ru>
Reviewed by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://www.postgresql.org/message-id/flat/97820ce8-a1cd-407f-a02b-47368fadb14b%40tantorlabs.com
This commit standardizes the output format for LSNs to ensure consistent
representation across various tools and messages. Previously, LSNs were
inconsistently printed as `%X/%X` in some contexts, while others used
zero-padding. This often led to confusion when comparing LSNs.
To address this, the LSN format is now uniformly set to `%X/%08X`,
ensuring the lower 32-bit part is always zero-padded to eight
hexadecimal digits.
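For example (illustrative):
SELECT '0/12345'::pg_lsn;  -- now rendered as 0/00012345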
Author: Japin Li <japinli@hotmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/ME0P300MB0445CA53CA0E4B8C1879AF84B641A@ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
This refactoring is a follow-up of the work done in 5a1dfde8334b, that
has switched 2PC file names to use FullTransactionIds when written on
disk. This will help with the integration of a follow-up solution
related to the handling of two-phase files during recovery, to address
older defects while reading these from disk after a crash.
This change is useful in itself as it reduces the need to build the
file names from epoch numbers and TransactionIds, because we can directly
use FullTransactionIds, from which the 2PC file names are derived.
So this avoids a lot of back-and-forth between the FullTransactionIds
retrieved from the file names and how these are passed around in the
internal 2PC logic.
Note that the core of the change is the use of a FullTransactionId
instead of a TransactionId in GlobalTransactionData, that tracks 2PC
file information in shared memory. The change in TwoPhaseCallback makes
this commit unfit for stable branches.
Noah has contributed a good chunk of this patch. I have spent some time
on it as well while working on the issues with two-phase state files and
recovery.
Author: Noah Misch <noah@leadboat.com>
Co-Authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/Z5sd5O9JO7NYNK-C@paquier.xyz
Discussion: https://postgr.es/m/20250116205254.65.nmisch@google.com
libxml2 has deprecated the members of xmlBuffer, and it is recommended
to access them with dedicated routines. We have only one case in the
tree where this shows an impact: xml2/xpath.c where "content" was
getting directly accessed. The rest of the code looked fine, checking
the PostgreSQL code with libxml2 close to the top of its "2.14" branch.
xmlBufferContent() has existed since the year 2000, based on a check of
the upstream libxml2 tree, so let's switch to it.
Like 400928b83bd2, backpatch all the way down as this can have an impact
on all the branches already released once newer versions of libxml2 get
more popular.
Reported-by: Walid Ibrahim <walidib@amazon.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aGdSdcR4QTjEHX6s@paquier.xyz
Backpatch-through: 13
When estimating the cost/size of a pre-sorted path for a given upper
relation using local stats, this function dereferences the passed-in
PgFdwPathExtraData pointer without checking that it is not NULL. But
that is not a bug as the pointer is guaranteed to be non-NULL in that
case; to avoid confusion, add an Assert to ensure that it is not NULL
before dereferencing it.
Reported-by: Ranier Vilela <ranier.vf@gmail.com>
Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Reviewed-by: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://postgr.es/m/CAEudQArgiALbV1akQpeZOgim7XP05n%3DbDP1%3DTcOYLA43nRX_vA%40mail.gmail.com
With tables defined like this,
CREATE TABLE ip (id int PRIMARY KEY);
CREATE TABLE ic (id int) INHERITS (ip);
ALTER TABLE ic ALTER id DROP NOT NULL;
pg_upgrade fails during the schema restore phase due to this error:
ERROR: column "id" in child table must be marked NOT NULL
This can only be fixed by marking the child column as NOT NULL before
the upgrade, which could take an arbitrary amount of time (because ic's
data must be scanned). Have pg_upgrade's check mode warn if that
condition is found, so that users know what to adjust before running the
upgrade for real.
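For the tables above, the adjustment to make before the real upgrade
would be along these lines (this is what forces the scan of ic's data):
ALTER TABLE ic ALTER id SET NOT NULL;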
Author: Ali Akbar <the.apaan@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Backpatch-through: 13
Discussion: https://postgr.es/m/CACQjQLoMsE+1pyLe98pi0KvPG2jQQ94LWJ+PTiLgVRK4B=i_jg@mail.gmail.com
Commit d70b17636dd introduced the IndexCheckableCallback typedef for
a callback function, but it was never used. This commit removes
the unused typedef to clean up dead code.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/e1ea4e14-3b21-4e01-a5f2-0686883265df@oss.nttdata.com
Attempting to use commit timestamps during bootstrapping leads to an
assertion failure, that can be reached for example with an initdb -c
that enables track_commit_timestamp. It makes little sense to register
a commit timestamp for a BootstrapTransactionId, so let's disable the
activation of the module in this case.
This problem has been independently reported once by each author of this
commit. Each author has proposed basically the same patch, relying on
IsBootstrapProcessingMode() to skip the use of commit_ts during
bootstrap. The test addition is a suggestion by me, and is applied down
to v16.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Author: Andy Fan <zhihuifan1213@163.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/OSCPR01MB14966FF9E4C4145F37B937E52F5102@OSCPR01MB14966.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/87plejmnpy.fsf@163.com
Backpatch-through: 13
Previously, truncating a temporary relation required scanning the entire
local buffer pool once per relation fork to invalidate buffers. This could
be slow, especially with large local buffers, as the scan was repeated
multiple times.
A similar issue with regular tables (shared buffers) was addressed in
commit 6d05086c0a7 by scanning the buffer pool only once for all forks.
This commit applies the same optimization to temporary relations,
improving truncation performance.
Author: Daniil Davydov <3danissimo@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Discussion: https://postgr.es/m/CAJDiXggNqsJOH7C5co4jA8nDk8vw-=sokyh5s1_TENWnC6Ofcg@mail.gmail.com
If, after removal of useless null-constant arguments, a CoalesceExpr
has exactly one remaining argument, we can just take that argument as
the result, without bothering to wrap a new CoalesceExpr around it.
This isn't likely to produce any great improvement in runtime per se,
but it can lead to better plans since the planner no longer has to
treat the expression as non-strict.
However, there were a few regression test cases that intentionally
wrote COALESCE(x) as a shorthand way of creating a non-strict
subexpression. To avoid ruining the intent of those tests, write
COALESCE(x,x) instead. (If anyone ever proposes de-duplicating
COALESCE arguments, we'll need another iteration of this arms race.
But it seems pretty unlikely that such an optimization would be
worthwhile.)
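A sketch of the effect, using a hypothetical table t with a column x:
CREATE TABLE t (x int);
EXPLAIN (VERBOSE, COSTS OFF) SELECT COALESCE(x, NULL) FROM t;
-- the output expression is now plain x, not COALESCE(x)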
Author: Maksim Milyutin <maksim.milyutin@tantorlabs.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/8e8573c3-1411-448d-877e-53258b7b2be0@tantorlabs.ru
Using the just-added infrastructure, extend btree_gin to support
cross-type operators in its other opclasses. All of the cross-type
comparison operators supported by the core btree opclasses for
these datatypes are now available for btree_gin indexes as well.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com>
Discussion: https://postgr.es/m/262624.1738460652@sss.pgh.pa.us
Extend the infrastructure in btree_gin.c to permit cross-type
operators, and add the code to support them for the int2, int4,
and int8 opclasses. (To keep this patch digestible, I left
the other datatypes for a separate patch.) This improves the
usability of btree_gin indexes by allowing them to support the
same set of queries that a regular btree index does.
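An illustrative sketch, with a hypothetical table t and int4 column a:
CREATE EXTENSION btree_gin;
CREATE TABLE t (a int4);
CREATE INDEX ON t USING gin (a);
-- a cross-type qual like this one can now use the gin index:
SELECT count(*) FROM t WHERE a < 42::int8;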
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com>
Discussion: https://postgr.es/m/262624.1738460652@sss.pgh.pa.us
Previous commits invented timestamp2timestamptz_opt_overflow,
date2timestamp_opt_overflow, and date2timestamptz_opt_overflow
functions to perform non-error-throwing conversions between
datetime types. This patch completes the set by adding
timestamp2date_opt_overflow, timestamptz2date_opt_overflow,
and timestamptz2timestamp_opt_overflow.
In addition, adjust timestamp2timestamptz_opt_overflow so that it
doesn't throw error if timestamp2tm fails, but treats that as an
overflow case. The situation probably can't arise except with an
invalid timestamp value, and I can't think of a way that that would
happen except data corruption. However, it's pretty silly to have a
function whose entire reason for existence is to not throw errors for
out-of-range inputs nonetheless throw an error for out-of-range input.
The new APIs are not used in this patch, but will be needed in
upcoming btree_gin changes.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Arseniy Mukhin <arseniy.mukhin.dev@gmail.com>
Discussion: https://postgr.es/m/262624.1738460652@sss.pgh.pa.us
Commits 8319e5cb5 et al missed the fact that ATPostAlterTypeCleanup
contains three calls to ATPostAlterTypeParse, and the other two
also need protection against passing a relid that we don't yet
have lock on. Add similar logic to those code paths, and add
some test cases demonstrating the need for it.
In v18 and master, the test cases demonstrate that there's a
behavioral discrepancy between stored generated columns and virtual
generated columns: we disallow changing the expression of a stored
column if it's used in any composite-type columns, but not that of
a virtual column. Since the expression isn't actually relevant to
either sort of composite-type usage, this prohibition seems
unnecessary; but changing it is a matter for separate discussion.
For now we are just documenting the existing behavior.
Reported-by: jian he <jian.universality@gmail.com>
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CACJufxGKJtGNRRSXfwMW9SqVOPEMdP17BJ7DsBf=tNsv9pWU9g@mail.gmail.com
Backpatch-through: 13
The command is: ALTER TABLE x ADD [CONSTRAINT y] NOT NULL z
This syntax was added in 18, but I got pushback for getting commit
dbf42b84ac7b in 18 (also tab-completion for new syntax) after the
feature freeze, so I'll put this in master only for now.
Author: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Discussion: https://postgr.es/m/d4f14c6b-086b-463c-b15f-01c7c9728eab@oss.nttdata.com
Discussion: https://postgr.es/m/202505111448.bwbfomrymq4b@alvherre.pgsql
Commit 08aa89b3262 removed the COMMIT_TS_SETTS WAL record,
leaving xl_commit_ts_set and SizeOfCommitTsSet unused. However,
it missed removing these definitions. This commit cleans up
the leftover code.
Since this is a cleanup rather than a bug fix, it is applied only to
the master branch.
Author: Andy Fan <zhihuifan1213@163.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/87ecuzmkqf.fsf@163.com
The documentation for pg_replication_slots previously mentioned only
max_slot_wal_keep_size as a condition under which the wal_status column
could show unreserved or lost. However, since commit be87200,
replication slots can also be invalidated due to horizon or wal_level,
and since commit ac0e33136ab, idle_replication_slot_timeout can also
trigger this state.
This commit updates the description of the wal_status column to
reflect that max_slot_wal_keep_size is not the only cause of the lost state.
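An illustrative way to see the invalidation cause alongside the state:
SELECT slot_name, wal_status, invalidation_reason
  FROM pg_replication_slots;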
Back-patched to v16, where the additional invalidation cases were introduced.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Discussion: https://postgr.es/m/78b34e84-2195-4f28-a151-5d204a382fdd@oss.nttdata.com
Backpatch-through: 16
If certain constraint characteristic clauses (NO INHERIT, NOT VALID, NOT
ENFORCED) are given to CREATE CONSTRAINT TRIGGER, the resulting error
message is
ERROR: TRIGGER constraints cannot be marked NO INHERIT
which is a bit silly, because these aren't "constraints of type
TRIGGER". Hardcode a better error message to prevent it. This is a
cosmetic fix for quite a fringe problem with no known complaints from
users, so no backpatch.
While at it, silently accept ENFORCED if given.
Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: jian he <jian.universality@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CAAJ_b97hd-jMTS7AjgU6TDBCzDx_KyuKxG+K-DtYmOieg+giyQ@mail.gmail.com
Discussion: https://postgr.es/m/CACJufxHSp2puxP=q8ZtUGL1F+heapnzqFBZy5ZNGUjUgwjBqTQ@mail.gmail.com
AlterDomainStmt.subtype used characters for its subtypes of commands,
SET|DROP DEFAULT|NOT NULL and ADD|DROP|VALIDATE CONSTRAINT, which were
hardcoded in a couple of places of the code. The code is improved by
using an enum instead, with the same character values as the original
code.
Note that the field was documented in parsenodes.h, but the documentation
forgot to mention 'V' (VALIDATE CONSTRAINT).
Author: Quan Zongliang <quanzongliang@yeah.net>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/41ff310b-16bd-44b9-a3ef-97e20f14b709@yeah.net
The documentation previously stated that the wal_status column is NULL
if restart_lsn is NULL in the pg_replication_slots view. This is incorrect,
and wal_status can be "lost" even when restart_lsn is NULL.
This commit removes the incorrect description.
Back-patched to all supported versions.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Discussion: https://postgr.es/m/c9d23cdc-b5dd-455a-8ee9-f1f24d701d89@oss.nttdata.com
Backpatch-through: 13
The COPY FROM command now accepts a non-negative integer for the HEADER option,
allowing multiple header lines to be skipped. This is useful when the input
contains multi-line headers that should be ignored during data import.
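For example, skipping two header lines (illustrative table and path):
COPY tbl FROM '/tmp/data.csv' WITH (FORMAT csv, HEADER 2);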
Author: Shinya Kato <shinya11.kato@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/CAOzEurRPxfzbxqeOPF_AGnAUOYf=Wk0we+1LQomPNUNtyZGBZw@mail.gmail.com
Currently check_recovery_target_timeline() converts any value that is
not "current", "latest", or a valid integer to 0. So, for example, the
following configuration added to postgresql.conf followed by a startup:
recovery_target_timeline = 'bogus'
recovery_target_timeline = '9999999999'
... results in the following error patterns:
FATAL: 22023: recovery target timeline 0 does not exist
FATAL: 22023: recovery target timeline 1410065407 does not exist
This is confusing, because the server does not reflect the intention of
the user, and just reports incorrect data unrelated to the GUC.
The origin of the problem is that we do not perform a range check on the
GUC value passed in for recovery_target_timeline.  This commit improves
the situation by using strtou64() and by providing stricter range
checks. Some test cases are added for the cases of an incorrect, an
upper-bound and a lower-bound timeline value, checking the sanity of the
reports based on the contents of the server logs.
Author: David Steele <david@pgmasters.net>
Discussion: https://postgr.es/m/e5d472c7-e9be-4710-8dc4-ebe721b62cea@pgbackrest.org
Currently, we do not support Memoize for SEMI and ANTI joins because
nested loop SEMI/ANTI joins do not scan the inner relation to
completion, which prevents Memoize from marking the cache entry as
complete. One might argue that we could mark the cache entry as
complete after fetching the first inner tuple, but that would not be
safe: if the first inner tuple and the current outer tuple do not
satisfy the join clauses, a second inner tuple matching the parameters
would find the cache entry already marked as complete.
However, if the inner side is provably unique, this issue doesn't
arise, since there would be no second matching tuple. That said, this
doesn't help in the case of SEMI joins, because a SEMI join with a
provably unique inner side would already have been reduced to an inner
join by reduce_unique_semijoins.
Therefore, in this patch, we check whether the inner relation is
provably unique for ANTI joins and enable the use of Memoize in such
cases.
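A sketch of a query shape that can now use Memoize, with hypothetical
tables where t2.id is a primary key (making the inner side provably
unique):
CREATE TABLE t1 (x int);
CREATE TABLE t2 (id int PRIMARY KEY);
SELECT * FROM t1 WHERE NOT EXISTS (SELECT 1 FROM t2 WHERE t2.id = t1.x);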
Author: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: wenhui qiu <qiuwenhuifx@gmail.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CAMbWs48FdLiMNrmJL-g6mDvoQVt0yNyJAqMkv4e2Pk-5GKCZLA@mail.gmail.com
This routine has come up as a useful piece to be able to know the list of
injection points currently attached to a system.  One use would be in a
set-returning function, or just to let out-of-core code play
with it.
This hides the internals of the shared memory array lookup holding the
information about the injection points (point name, library and function
name), allocating the result in a palloc'd List consumable by the
caller.
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Discussion: https://postgr.es/m/Z_xYkA21KyLEHvWR@paquier.xyz
Discussion: https://postgr.es/m/aBG2rPwl3GE7m1-Q@paquier.xyz
PQcancelCreate failed to copy struct pg_conn_host's "type" field,
instead leaving it zero (a/k/a CHT_HOST_NAME). This seemingly
has no great ill effects if it should have been CHT_UNIX_SOCKET
instead, but if it should have been CHT_HOST_ADDRESS then a
null-pointer dereference will occur when the cancelConn is used.
Bug: #18974
Reported-by: Maxim Boguk <maxim.boguk@gmail.com>
Author: Sergei Kornilov <sk@zsrv.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18974-575f02b2168b36b3@postgresql.org
Backpatch-through: 17
In commit fe07100e82, I renamed a couple of functions in
test_dsm_registry to make it clear what they are testing. However,
the buildfarm's cross-version upgrade tests run pg_upgrade with the
test modules installed, so this caused errors like:
ERROR: could not find function "get_val_in_shmem" in file ".../test_dsm_registry.so"
To fix, revert those renames. I could probably get away with only
un-renaming the C symbols, but I figured I'd avoid introducing
function name mismatches. Also, AFAICT the buildfarm's
cross-version upgrade tests do not run the test module tests
post-upgrade, else we'll need to properly version the extension.
Per buildfarm member crake.
Discussion: https://postgr.es/m/aGVuYUNW23tStUYs%40nathan
Presently, the dynamic shared memory (DSM) registry only provides
GetNamedDSMSegment(), which allocates a fixed-size segment. To use
the DSM registry for more sophisticated things like dynamic shared
memory areas (DSAs) or a hash table backed by a DSA (dshash), users
need to create a DSM segment that stores various handles and LWLock
tranche IDs and to write fairly complicated initialization code.
Furthermore, there is likely little variation in this
initialization code between libraries.
This commit introduces functions that simplify allocating a DSA or
dshash within the DSM registry. These functions are very similar
to GetNamedDSMSegment(). Notable differences include the lack of
an initialization callback parameter and the prohibition of calling
the functions more than once for a given entry in each backend
(which should be trivially avoidable in most circumstances). While
at it, this commit bumps the maximum DSM registry entry name length
from 63 bytes to 127 bytes.
Also note that even though one could presumably detach/destroy the
DSAs and dshashes created in the registry, such use-cases are not
yet well-supported, if for no other reason than the associated DSM
registry entries cannot be removed. Adding such support is left as
a future exercise.
The test_dsm_registry test module contains tests for the new
functions and also serves as a complete usage example.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Florents Tselai <florents.tselai@gmail.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Discussion: https://postgr.es/m/aEC8HGy2tRQjZg_8%40nathan
Restore the nbtree preprocessing comments describing how we mark nbtree
row compare members required to the way they were prior to 2016 bugfix
commit a298a1e0.
Oversight in commit bd3f59fd, which made nbtree preprocessing revert to
the original 2006 rules, but neglected to revert these comments.
Backpatch-through: 18
The array-based variant of width_bucket() has always accepted NaN
inputs, treating them as equal but larger than any non-NaN,
as we do in ordinary comparisons. But up to now, the four-argument
variants threw errors for a NaN operand. This is inconsistent
and unnecessary, since we can perfectly well regard NaN as falling
after the last bucket.
We do still throw error for NaN or infinity histogram-bound inputs,
since there's no way to compute sensible bucket boundaries.
Arguably this is a bug fix, but given the lack of field complaints
I'm content to fix it in master.
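For example (sketch; NaN now falls after the last bucket):
SELECT width_bucket('NaN'::float8, 0.0::float8, 10.0::float8, 5);
-- returns 6 (count + 1) instead of throwing an error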
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/2822872.1750540911@sss.pgh.pa.us
Trying to alter a constraint so that it becomes NOT VALID results in an
error that assumes the constraint is a foreign key. This is potentially
wrong, so give a more generic error message.
While at it, give CREATE CONSTRAINT TRIGGER a better error message as
well.
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CACJufxHSp2puxP=q8ZtUGL1F+heapnzqFBZy5ZNGUjUgwjBqTQ@mail.gmail.com
Recent nbtree bugfix commit 5f4d98d4 added a special case to the code
that sets up a page-level prefix of keys that are definitely satisfied
by every tuple on the page: whenever _bt_set_startikey reached a row
compare key, we'd refuse to apply the pstate.forcenonrequired behavior
in scans where that usually happens (scans with a higher-order array
key). That hack made the scan avoid essentially the same infinite
cycling behavior that also affected nbtree scans with redundant keys
(keys that preprocessing could not eliminate) prior to commit f09816a0.
There are now serious doubts about this row compare workaround.
Testing has shown that a scan with a row compare key and an array key
could still read the same leaf page twice (without the scan's direction
changing), which isn't supposed to be possible following the SAOP
enhancements added by Postgres 17 commit 5bf748b8. Also, we still
allowed a required row compare key to be used with forcenonrequired mode
when its header key happened to be beyond the pstate.ikey set by
_bt_set_startikey, which was complicated and brittle.
The underlying problem was that row compares had inconsistent rules
around how scans start (which keys can be used for initial positioning
purposes) and how scans end (which keys can set continuescan=false).
Quals with redundant keys that could not be eliminated by preprocessing
also had that same quality to them prior to today's bugfix f09816a0. It
now seems prudent to bring row compare keys in line with the new charter
for required keys, by making the start and end rules symmetric.
This commit fixes two points of disagreement between _bt_first and
_bt_check_rowcompare. Firstly, _bt_check_rowcompare was capable of
ending the scan at the point where it needed to compare an ISNULL-marked
row compare member that came immediately after a required row compare
member. _bt_first now has symmetric handling for NULL row compares.
Secondly, _bt_first had its own ideas about which keys were safe to use
for initial positioning purposes. It could use fewer or more keys than
_bt_check_rowcompare. _bt_first now uses the same requiredness markings
as _bt_check_rowcompare for this.
Now that _bt_first and _bt_check_rowcompare agree on how to start and
end scans, we can get rid of the forcenonrequired special case, without
any risk of infinite cycling. This approach also makes row compare keys
behave more like regular scalar keys, particularly within _bt_first.
Fixing these inconsistencies necessitates dealing with a related issue
with the way that row compares were marked required by preprocessing: we
didn't mark any lower-order row members required following 2016 bugfix
commit a298a1e0.  That approach was overly broad.  The bug in question was
actually an oversight in how _bt_check_rowcompare dealt with tuple NULL
values that failed to satisfy a scan key marked required in the opposite
scan direction (it was a bug in 2011 commits 6980f817 and 882368e8, not
a bug in 2006 commit 3a0a16cb). Go back to marking row compare members
as required using the original 2006 rules, and fix the 2016 bug in a
more principled way: by limiting use of the "set continuescan=false with
a key required in the opposite scan direction upon encountering a NULL
tuple value" optimization to the first/most significant row member key.
While it isn't safe to use an implied IS NOT NULL qualifier to end the
scan when it comes from a required lower-order row compare member key,
it _is_ generally safe for such a required member key to end the scan --
provided the key is marked required in the _current_ scan direction.
This fixes what was arguably an oversight in either commit 5f4d98d4 or
commit 8a510275. It is a direct follow-up to today's commit f09816a0.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=pcijHL_mA0_TJ5LiTB28QpQ0cGtT-ccFV=KzuunNDDQ@mail.gmail.com
Backpatch-through: 18
nbtree preprocessing's handling of redundant (and contradictory) keys
created problems for scans with = arrays. It was just about possible
for a scan with an = array key and one or more redundant keys (keys that
preprocessing could not eliminate due to an incomplete opfamily and a
cross-type key) to get stuck. Testing has shown that infinite cycling
where the scan never manages to make forward progress was possible.
This could happen when the scan's arrays were reset in _bt_readpage's
forcenonrequired=true path (added by bugfix commit 5f4d98d4) when the
arrays weren't at least advanced up to the same point that they were in
at the start of the _bt_readpage call. Earlier redundant keys prevented
the finaltup call to _bt_advance_array_keys from reaching lower-order
keys that needed to be used to sufficiently advance the scan's arrays.
To fix, make preprocessing leave the scan's keys in a state that is as
close as possible to how it'll usually leave them (in the common case
where there's no redundant keys that preprocessing failed to eliminate).
Now nbtree preprocessing _reliably_ leaves behind at most one required
>/>= key per index column, and at most one required </<= key per index
column. Columns that have one or more = keys that are eligible to be
marked required (based on the traditional rules) prioritize the = keys
over redundant inequality keys; they'll _reliably_ be left with only one
of the = keys as the index column's only required key.
Keys that are not marked required (whether due to the new preprocessing
step running or for some other reason) are relocated to the end of the
so->keyData[] array as needed. That way they'll always be evaluated
after the scan's required keys, and so cannot prevent code in places
like _bt_advance_array_keys and _bt_first from reaching a required key.
Also teach _bt_first to decide which initial positioning keys to use
based on the same requiredness markings that have long been used by
_bt_checkkeys/_bt_advance_array_keys. This is a necessary condition for
reliably avoiding infinite cycling. _bt_advance_array_keys expects to
be able to reason about what'll happen in the next _bt_first call should
it start another primitive index scan, by evaluating inequality keys
that were marked required in the opposite-to-scan direction only.
Now everybody (_bt_first, _bt_checkkeys, and _bt_advance_array_keys)
will always agree on which exact key will be used on each index column
to start and/or end the scan (except when row compare keys are involved,
which have similar problems not addressed by this commit).
An upcoming commit will finish off the work started by this commit by
harmonizing how _bt_first, _bt_checkkeys, and _bt_advance_array_keys
apply row compare keys to start and end scans.
This fixes what was arguably an oversight in either commit 5f4d98d4 or
commit 8a510275.
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=ds4M+3NXMgwxYxqU8MULaLf696_v5g=9WNmWL2=Uo2A@mail.gmail.com
Backpatch-through: 18
The previous minimum was to maintain support for Python 3.5, but we
now require Python 3.6 anyway (commit 45363fca637), so that reason is
obsolete. A small raise to Meson 0.57 allows getting rid of a fair
amount of version conditionals and silences some future-deprecated
warnings.
With the version bump, the following deprecation warnings appeared and
are fixed:
WARNING: Project targets '>=0.57' but uses feature deprecated since '0.55.0': ExternalProgram.path. use ExternalProgram.full_path() instead
WARNING: Project targets '>=0.57' but uses feature deprecated since '0.56.0': meson.build_root. use meson.project_build_root() or meson.global_build_root() instead.
It turns out that meson 0.57.0 and 0.57.1 are buggy for our use, so
the minimum is actually set to 0.57.2. This is specific to this
version series; in the future we won't necessarily need to be this
precise.
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/42e13eb0-862a-441e-8d84-4f0fd5f6def0%40eisentraut.org
Commit c120550edb86 optimized the vacuuming of relations without
indexes (a.k.a. one-pass strategy) by directly marking dead item IDs
as LP_UNUSED. However, the periodic FSM vacuum was still checking if
dead item IDs had been marked as LP_DEAD when attempting to vacuum the
FSM every VACUUM_FSM_EVERY_PAGES blocks. This condition was never met
due to the optimization, resulting in missed FSM vacuum
opportunities.
This commit modifies the periodic FSM vacuum condition to use the
number of tuples deleted during HOT pruning. This count includes items
marked as either LP_UNUSED or LP_REDIRECT, both of which are expected
to result in new free space to report.
Back-patch to v17 where the vacuum optimization for tables with no
indexes was introduced.
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CAD21AoBL8m6B9GSzQfYxVaEgvD7-Kr3AJaS-hJPHC+avm-29zw@mail.gmail.com
Backpatch-through: 17
When decompressing some input data, the calculation for the initial
starting point and the initial size were incorrect, potentially leading
to failures when decompressing contents with LZ4. These initialization
points are fixed in this commit, bringing the logic closer to what
exists for gzip and zstd.
The compressed data itself is fine (for example backups taken
with LZ4 can still be decompressed with a "lz4" command); only the
decompression part reading the input data was impacted by this issue.
This code path impacts pg_basebackup and pg_verifybackup, which can use
the LZ4 decompression routines with an archive streamer, or any tools
that try to use the archive streamers in src/fe_utils/.
The issue is easier to reproduce with files that have a low compression
ratio, like ones filled with random data, of at least 512kB in size,
but this could happen with anything as long as it is stored in a data
folder. Some tests are added based on this idea, with a file filled
with random bytes grabbed from the backend, written at the root of the
data folder. This is proving good enough to reproduce the original
problem.
Author: Mikhail Gribkov <youzhick@gmail.com>
Discussion: https://postgr.es/m/CAMEv5_uQS1Hg6KCaEP2JkrTBbZ-nXQhxomWrhYQvbdzR-zy-wA@mail.gmail.com
Backpatch-through: 15
This commit moves all the routines related to the bytea data type into
its own new file, called bytea.c, clearing some of the bloat in
varlena.c. This includes the routines for:
- Input, output, receive and send
- Comparison
- Casts to integer types
- bytea-specific functions
The internals of the routines moved here are unchanged, with one
exception. This comes with a twist in bytea_string_agg_transfn(), where
the call to makeStringAggState() is replaced by the internals of this
routine, still located in varlena.c. This simplifies the move to the
new file by not having to expose makeStringAggState().
Author: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CAJ7c6TMPVPJ5DL447zDz5ydctB8OmuviURtSwd=PHCRFEPDEAQ@mail.gmail.com
Prior to this patch, every FETCH call with a different fetch size
specified would generate a unique queryId.  Depending on the workload,
this could lead to significant bloat in pg_stat_statements, as repeatedly
calling a specific cursor would result in a new queryId each time.  For
example, FETCH 1 c1; and FETCH 2 c1; would produce different queryIds.
This patch improves the situation by normalizing the fetch size, so that
semantically similar statements generate the same queryId.  As a result,
statements like the below, which differ syntactically but have the same
effect, will now share a single queryId:
FETCH FROM c1
FETCH NEXT c1
FETCH 1 c1
In order to do a normalization based on the keyword used in FETCH,
FetchStmt is tweaked with a new FetchDirectionKeywords. This matters
for "howMany", which could be set to a negative value depending on the
direction, and we want to normalize the queries with enough information
about the direction keywords provided, including RELATIVE, ABSOLUTE or
all the ALL variants.
Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/CAA5RZ0tA6LbHCg2qSS+KuM850BZC_+ZgHV7Ug6BXw22TNyF+MA@mail.gmail.com
We stopped defining IOV_MAX on non-Windows systems in 75357ab94, on
the assumption that every non-Windows system defines it in <limits.h>
as required by X/Open. GNU Hurd, however, doesn't follow that
standard either. Put back the old logic to assume 16 if it's
not defined.
Author: Michael Banck <mbanck@gmx.net>
Co-authored-by: Christoph Berg <myon@debian.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6862e8d1.050a0220.194b8d.76fa@mx.google.com
Discussion: https://postgr.es/m/6846e0c3.df0a0220.39ef9b.c60e@mx.google.com
Backpatch-through: 16
The existing code assumed that O_RDONLY is defined as 0, but this is
not required by POSIX and is not true on GNU Hurd. We can avoid
the assumption by relying on O_ACCMODE to mask the fcntl() result.
(Hopefully, all supported platforms define that.)
Author: Michael Banck <mbanck@gmx.net>
Co-authored-by: Samuel Thibault
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6862e8d1.050a0220.194b8d.76fa@mx.google.com
Discussion: https://postgr.es/m/68480868.5d0a0220.1e214d.68a6@mx.google.com
Backpatch-through: 13
Previously, pattern matching and case mapping behavior branched based
on the provider. Refactor to use a method table, which is less
error-prone.
This is also a step toward multiple provider versions, which we may
want to support in the future.
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/2830211e1b6e6a2e26d845780b03e125281ea17b.camel%40j-davis.com
Querying the NUMA status can be quite time-consuming, especially with
large shared buffers. 8cc139bec34a called numa_move_pages() once, for
all buffers, and we had to wait for the syscall to complete.
But with the chunking, introduced by 7fe2f67c7c to work around a kernel
bug, we can do CHECK_FOR_INTERRUPTS() after each chunk, allowing users
to abort the execution.
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aEtDozLmtZddARdB@msg.df7cb.de
Backpatch-through: 18
When querying NUMA status of pages in shared memory, we need to touch
the memory first to get valid results. This may trigger valgrind
reports, because some of the memory (e.g. unpinned buffers) may be
marked as noaccess.
Solved by adding a valgrind suppression. An alternative would be to
adjust the access/noaccess status before touching the memory, but that
seems far too invasive. It would require all those places to have
detailed knowledge of what the shared memory stores.
The pg_numa_touch_mem_if_required() macro is replaced with a function.
Macros are invisible to suppressions, so it'd have to suppress reports
for the caller - e.g. pg_get_shmem_allocations_numa(). So we'd suppress
reports for the whole function, and that seems too heavy-handed. It might
easily hide other valid issues.
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aEtDozLmtZddARdB@msg.df7cb.de
Backpatch-through: 18
There's a kernel bug in do_pages_stat(), affecting systems combining
64-bit kernel and 32-bit user space. The function splits the request
into chunks of 16 pointers, but forgets the pointers are 32-bit when
advancing to the next chunk. Some of the pointers get skipped, and
memory after the array is interpreted as pointers. The result is that
the produced status of memory pages is mostly bogus.
Systems combining 64-bit and 32-bit environments like this might seem
rare, but that's not the case - all 32-bit Debian packages are built in
a 32-bit chroot on a system with a 64-bit kernel.
This is a long-standing kernel bug (since 2010), affecting pretty much
all kernels, so it'll take time until all systems get a fixed kernel.
Luckily, we can work around the issue by chunking the requests the same
way do_pages_stat() does, at least on affected systems. We don't know
what kernel a 32-bit build will run on, so all 32-bit builds use chunks
of 16 elements (the largest chunk before hitting the issue).
64-bit builds are not affected by this issue, and so could work without
the chunking. But chunking has other advantages, so we apply chunking
even for 64-bit builds, with chunks of 1024 elements.
Reported-by: Christoph Berg <myon@debian.org>
Author: Christoph Berg <myon@debian.org>
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aEtDozLmtZddARdB@msg.df7cb.de
Context: https://marc.info/?l=linux-mm&m=175077821909222&w=2
Backpatch-through: 18
Switch MSVC to use the conforming preprocessor, using the
/Zc:preprocessor option.
This allows us to drop the alternative implementation of
VA_ARGS_NARGS() for the previous "traditional" preprocessor.
This also prepares the way for enabling C11 mode in the future, which
enables the conforming preprocessor by default.
This now requires Visual Studio 2019. The installation documentation
is adjusted accordingly.
Discussion: https://www.postgresql.org/message-id/flat/01a69441-af54-4822-891b-ca28e05b215a%40eisentraut.org
The contrib module xml2/ has always been fuzzy about cleaning up the
memory allocated by the calls internal to libxml2, even though there are
APIs in place, all located in the backend's xml.c, that give a lot of
control over the error behavior.
The code paths fixed in the commit address multiple defects, while
sanitizing the code:
- In xpath.c, several allocations are done by libxml2 for
xpath_workspace, whose memory cleanup could go out of sight as it relied
on a single TRY/CATCH block done in pgxml_xpath(). workspace->res is
allocated by libxml2, and may finish by not being freed at all upon a
failure outside of a TRY area. This code is refactored so as the
TRY/CATCH block of pgxml_xpath() is moved one level higher to its
callers, which are responsible for cleaning up the contents of a
workspace on failure. cleanup_workspace() now requires a volatile
workspace, forcing as a rule that a TRY/CATCH block should be used.
- Several calls, like xmlStrdup(), xmlXPathNewContext(),
xmlXPathCtxtCompile(), etc. can return NULL on failures (for most of
them, allocation failures).  The callers forgot to check for failures, or
missed that pg_xml_error_occurred() should be called, to check if an error is
already on the stack.
- Some memory allocated by libxml2 calls was freed in an incorrect way,
"resstr" in xslt_process() being one example.
The class of errors fixed here is for problems that are unlikely to
happen in practice, so no backpatch is done.  The changes have
ended up being rather invasive, so it is perhaps not a bad thing to
be conservative and to keep these changes only on HEAD anyway.
Author: Michael Paquier <michael@paquier.xyz>
Reported-by: Karavaev Alexey <maralist86@mail.ru>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18943-2f2a04ab03904598@postgresql.org
Commit 19d8e2308bc added enum values with the prefix TU_, but a few
comments still referred to TUUI_, which was used in development
versions of the patches committed as 19d8e2308bc.
Author: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/20250701110216.8ac8a9e4c6f607f1d954f44a@sraoss.co.jp
Backpatch-through: 16
This commit fixes some defects in the backend's xml.c, found upon
inspection of the internals of libxml2:
- xmlEncodeSpecialChars() can fail on malloc(), returning NULL back to
the caller. xmltext() assumed that this could never happen. Like other
code paths, a TRY/CATCH block is added there, covering also the fact
that cstring_to_text_with_len() could fail a memory allocation, where
the backend would fail to free the buffer allocated by
xmlEncodeSpecialChars().
- Some libxml2 routines called in xmlelement() can return NULL, like
xmlAddChildList() or xmlTextWriterStartElement(). Dedicated errors are
added for them.
- xml_xmlnodetoxmltype() missed that xmlXPathCastNodeToString() can
fail on an allocation failure. In this case, the call can just be
moved inside the existing TRY/CATCH block; see the sketch after this
list.
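As an illustration, the NULL-check pattern applied by these fixes
looks roughly like this (simplified; the error message text is
illustrative):

    /* Any allocating libxml2 call may return NULL; report it instead
     * of dereferencing the result.  xmlerrcxt is the backend's
     * libxml2 error context, checked via pg_xml_error_occurred(). */
    xmlChar *escaped = xmlEncodeSpecialChars(NULL, xmlstr);

    if (escaped == NULL || pg_xml_error_occurred(xmlerrcxt))
        xml_ereport(xmlerrcxt, ERROR, ERRCODE_OUT_OF_MEMORY,
                    "could not escape XML characters");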
All these code paths could cause the server to crash. As this is
unlikely to be a problem in practice, no backpatch is done. Jim and I
have both been catching these defects; it is not clear who has scored
the most. The contrib module xml2/ has similar defects, which will be
addressed in a separate change.
Reported-by: Jim Jones <jim.jones@uni-muenster.de>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Discussion: https://postgr.es/m/aEEingzOta_S_Nu7@paquier.xyz
The current code in resolve_column_ref (dating to commits 01f7d2990
and fe24d7816) believes that not finding a RECFIELD datum is a
can't-happen case, in consequence of which I didn't spend a whole lot
of time considering what to do if it did happen. But it turns out
that it *can* happen if the would-be field name is a fully-reserved
PL/pgSQL keyword. Change the error message to describe that
situation, and add a test case demonstrating it.
This might need further refinement if anyone can find other ways to
trigger a failure here; but without an example it's not clear what
other error to throw.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://postgr.es/m/2185258.1745617445@sss.pgh.pa.us
On close inspection, there does not seem to be a strong reason
why these should be fully-reserved keywords. I guess they just
escaped consideration in previous attempts to minimize PL/pgSQL's
list of reserved words.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://postgr.es/m/2185258.1745617445@sss.pgh.pa.us
This provides a convenient way to look up a database's OID. For
example, the query

    SELECT * FROM pg_shdepend
    WHERE dbid = (SELECT oid FROM pg_database
                  WHERE datname = current_database());

can now be simplified to

    SELECT * FROM pg_shdepend
    WHERE dbid = current_database()::regdatabase;
Like the regrole type, regdatabase has cluster-wide scope, so we
disallow regdatabase constants from appearing in stored
expressions.
Bumps catversion.
Author: Ian Lawrence Barwick <barwick@gmail.com>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aBpjJhyHpM2LYcG0%40nathan
If the method is called in scalar context and we didn't pass in a
stderr handle, one won't be created. However, some error paths assume
that it exists, so in that case create a dummy stderr to avoid the
resulting Perl error.
Per gripe from Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru> and
adapted from his patch.
Discussion: https://postgr.es/m/378eac5de4b8ecb5be7bcdf2db9d2c4d@postgrespro.ru
Previously, tab completion for COPY only suggested plain tables
and partitioned tables, even though materialized views are also
valid for COPY TO (since commit 534874fac0b), and foreign tables
are valid for COPY FROM.
This commit enhances tab completion for COPY to also include
materialized views and foreign tables.
Views with INSTEAD OF INSERT triggers can be used with COPY FROM, but
they are rarely used, so plain views are intentionally excluded from
completion.
Author: jian he <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/CACJufxFxnSkikp+GormAGHcMTX1YH2HRXW1+3dJM9w7yY9hdsg@mail.gmail.com
When postfix operators were dropped in 1ed6b8956, the CREATE OPERATOR
docs were not updated to make the RIGHTARG argument mandatory in the
grammar.
While at it, make the RIGHTARG docs more concise. Also, the operator
docs mentioned "infix" in the introduction, while using "binary"
everywhere else.
Author: Christoph Berg <myon@debian.org>
Discussion: https://www.postgresql.org/message-id/flat/aAtpbnQphv4LWAye@msg.df7cb.de
Commit 1546e17f9d0 accidentally misspelled "additionally" as
"additionaly". Backpatch to v17 to match where the original commit
was backpatched.
Author: Daniel Gustafsson <daniel@yesql.se>
Backpatch-through: 17
This commit refactors the vacuum routines that rely on VacuumParams,
adding const markers where necessary to enforce a new policy in the
code: this structure should not be passed around as a pointer, as it
may be used across multiple relations and its contents should never
be updated. vacuum_rel() stands as an exception, as it touches the
"index_cleanup" and "truncate" options.
VacuumParams was introduced in 0d831389749a, and 661643dedad9 fixed a
bug impacting VACUUM operating on multiple relations. The changes
done in tableam.h break ABI compatibility, so this commit can only
happen on HEAD.
Author: Shihao Zhong <zhong950419@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Junwang Zhao <zhjwpku@gmail.com>
Discussion: https://postgr.es/m/CAGRkXqTo+aK=GTy5pSc-9cy8H2F2TJvcrZ-zXEiNJj93np1UUw@mail.gmail.com
log_line_prefix is changed to include "%b", the backend type, in the
TAP test configuration. %v and %x are removed from the CI
configuration, and the format around %b is changed.
The lack of a backend type in the postgresql.conf set by Cluster.pm
for the TAP test configuration is something that had been bugging me,
and it began the discussion that led to this change. The CI change
came up during the discussion, to be consistent with pg_regress.c, %v
and %x not being that useful to have.
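For instance, a prefix like the following (illustrative, not the
exact committed value) adds the backend type via %b:

    log_line_prefix = '%m [%p] %b '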
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aC0VaIWAXLgXcHVP@paquier.xyz