10f66468475 intended to limit the value of io_combine_limit to the minimum of
io_combine_limit and io_max_combine_limit. To avoid issues with interdependent
GUCs, it introduced io_combine_limit_guc and set io_combine_limit in assign
hooks. That plan was thwarted by guc_tables.c accidentally still referencing
io_combine_limit, instead of io_combine_limit_guc. That led to the GUC
machinery overriding the work done in the assign hooks, potentially
leaving io_combine_limit with too high a value.
The consequence of this bug was that when running with io_combine_limit
> io_max_combine_limit, the AIO machinery would not have reserved large
enough
iovec and IO data arrays, with one IO's arrays overlapping with another IO's,
leading to total confusion.
To make such a problem easier to detect in the future, add assertions to
pgaio_io_set_handle_data_* checking that the length does not exceed
io_max_combine_limit (not just PG_IOV_MAX).
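For illustration only, the assertion could look roughly like this
(a sketch; the actual function bodies and parameters follow the
existing pgaio_io_set_handle_data_* family):

    /* Sketch only: assert against the runtime maximum, not just the
     * compile-time PG_IOV_MAX. */
    void
    pgaio_io_set_handle_data_32(PgAioHandle *ioh, uint32 *data, uint8 len)
    {
        Assert(len <= PG_IOV_MAX);
        Assert(len <= io_max_combine_limit);

        /* ... store data/len in the handle's reserved arrays ... */
    }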
It'd be nice to have a few tests for this, but it's not entirely obvious how
to do so portably.
As remarked upon by Tom, the GUC assignment hooks really shouldn't set the
underlying variable, that's the job of the GUC machinery. Change that as well.
Discussion: https://postgr.es/m/c5jyqnuwrpigd35qe7xdypxsisdjrdba5iw63mhcse4mzjogxo@qdjpv22z763f
pgaio_io_reclaim() reset the fields in PgAioHandle before updating the state
to IDLE or incrementing the generation. For most things that's OK, but for
pg_get_aios() it is not - if it copied the PgAioHandle while fields were being
reset, we wouldn't detect that and could call
pgaio_io_get_target_description() with ioh->target == PGAIO_TID_INVALID,
leading to a crash.
Fix this issue by incrementing the generation and state earlier, before
resetting.
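Roughly, the reordering looks like this (a sketch; field names
approximate the AIO code):

    /* Bump generation and state first, so a concurrent pg_get_aios()
     * copy is detectably stale before any field is reset. */
    ioh->generation++;
    ioh->state = PGAIO_HS_IDLE;
    pg_write_barrier();

    /* only now is it safe to clear the fields */
    ioh->target = PGAIO_TID_INVALID;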
Also add an assertion to pgaio_io_get_target_description() for the target to
be valid - that'd have made this case a bit easier to debug. While at it,
add/update a few related assertions.
Author: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/062daca9-dfad-4750-9da8-b13388301ad9@gmail.com
Before commit a0ed19e0a9e there was a cast around these; the cast
inadvertently changed the signedness, but that happened to make the
format placeholder correct. Commit a0ed19e0a9e removed the casts, so
now the format placeholders have the wrong signedness.
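A self-contained illustration of the hazard (not the code in question):

    #include <stdio.h>

    int
    main(void)
    {
        long long v = -1;

        printf("%llu\n", (unsigned long long) v);  /* cast makes %llu correct */
        printf("%lld\n", v);    /* without a cast, the placeholder must
                                 * match the variable's signedness */
        return 0;
    }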
A correct cocktail of COPY FROM, SELECT and/or DML queries and
\syncpipeline was able to break the logic in charge of discarding
results of a pipeline, done in discardAbortedPipelineResults(). Such
a sequence makes the backend generate a FATAL error due to a protocol
synchronization loss.
This problem comes down to the fact that we did not consider the case of
libpq returning a PGRES_FATAL_ERROR when discarding the results of an
aborted pipeline. The discarding code is changed so that this result
status is handled as a special case, with the caller of
discardAbortedPipelineResults() being responsible for consuming the
result.
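A sketch of the special case (simplified relative to the real control
flow in discardAbortedPipelineResults()):

    /* On PGRES_FATAL_ERROR, stop discarding and hand the result back
     * so the caller can consume and report it. */
    if (res != NULL && PQresultStatus(res) == PGRES_FATAL_ERROR)
        return res;
    PQclear(res);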
A couple of tests are added to cover the problems reported, bringing an
interesting gain in coverage as there were no tests in the tree covering
the case of protocol synchronization loss.
Issue introduced by 41625ab8ea3d.
Reported-by: Alexander Kozhemyakin <a.kozhemyakin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/ebf6ce77-b180-4d6b-8eab-71f641499ddf@postgrespro.ru
Not having this check would produce a core dump at startup when running
pgstat_read_statsfile(), in the case where the information of a stats
kind for an entry in the dshash could not be found. The same check
already happens for fixed-numbered stats and entries that are stored
with their names. This issue can be seen with custom stats kinds.
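The added check has roughly this shape (a sketch; the exact error
handling follows the surrounding code):

    /* Sketch of the check in pgstat_read_statsfile() */
    const PgStat_KindInfo *info = pgstat_get_kind_info(kind);

    if (info == NULL)
    {
        elog(WARNING, "invalid stats kind %u, skipping", kind);
        goto error;   /* abandon the stats file, as for other corruption */
    }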
Note that this problem can be reproduced with what is in the core code:
- Tweak the test module injection_points to not load the fixed-numbered
stats part, leaving only the variable-numbered stats.
- Create an instance with injection_points defined in
shared_preload_libraries.
- Create a pgstats entry by attaching and running a point.
- Restart the server without shared_preload_libraries. The startup
process detects that something is wrong and reports a WARNING.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aAieZAvM+K1d89R2@ip-10-97-1-34.eu-west-3.compute.internal
One place in hash_create() used DynaHashAlloc() as a convenient
shorthand for MemoryContextAlloc(). That was fine when it was
written, but it stopped being fine when 9c911ec06 changed
DynaHashAlloc() to use MCXT_ALLOC_NO_OOM (mea culpa). Change
the code to call plain MemoryContextAlloc() as intended.
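In other words, roughly (a sketch, not the verbatim diff):

    /* hash_create() needs an allocation that throws on OOM, so call
     * MemoryContextAlloc() directly instead of the no-OOM
     * DynaHashAlloc(). */
    hashp->tabname = (char *) MemoryContextAlloc(CurrentDynaHashCxt,
                                                 strlen(tabname) + 1);
    strcpy(hashp->tabname, tabname);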
I think that this bug may be unreachable in practice, since we now
always create AllocSets with some space already allocated, so that
an OOM failure here for a non-shared hash table should be impossible
(with a hash table name of reasonable length anyway). And there
aren't enough shared hash tables to make a crash for one of those
probable. Nonetheless it's clearly not operating as designed, so
back-patch to v16 where 9c911ec06 came in.
Reported-by: Maksim Korotkov <m.korotkov@postgrespro.ru>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/219bdccd460510efaccf90b57e5e5ef2@postgrespro.ru
Backpatch-through: 16
RHEL8 ships a patched 3.6.8 as its base Python version, and I
accidentally let some newer Python-isms creep into oauth_server.py
during development.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Tested-by: Renan Alves Fonseca <renanfonseca@gmail.com>
Tested-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/16098.1745079444%40sss.pgh.pa.us
b85a9d046efd introduced a new RelIdToTypeIdCacheHash, whose entries should
exist for typecache entries with TCFLAGS_HAVE_PG_TYPE_DATA flag set or any
of TCFLAGS_OPERATOR_FLAGS set or tupDesc set. However, TypeCacheOpcCallback(),
which resets TCFLAGS_OPERATOR_FLAGS, neglected to update
RelIdToTypeIdCacheHash.
This commit adds a delete_rel_type_cache_if_needed() call to the
TypeCacheOpcCallback() function to maintain RelIdToTypeIdCacheHash after
resetting TCFLAGS_OPERATOR_FLAGS.
Also, this commit fixes the name of the delete_rel_type_cache_if_needed()
function where it is mentioned in comments.
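Approximately, the callback now does this (a sketch based on the
description above, not the exact code):

    static void
    TypeCacheOpcCallback(Datum arg, int cacheid, uint32 hashvalue)
    {
        HASH_SEQ_STATUS status;
        TypeCacheEntry *typentry;

        hash_seq_init(&status, TypeCacheHash);
        while ((typentry = (TypeCacheEntry *) hash_seq_search(&status)) != NULL)
        {
            typentry->flags &= ~TCFLAGS_OPERATOR_FLAGS;
            /* keep RelIdToTypeIdCacheHash consistent with the flags */
            delete_rel_type_cache_if_needed(typentry);
        }
    }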
Reported-by: Noah Misch
Discussion: https://postgr.es/m/20250411203241.e9.nmisch%40google.com
To estimate with extended statistics, we need to clear the varnullingrels
field in the expression, and duplicates are not allowed in the GroupVarInfo
list. We might re-use add_unique_group_var(), but we don't do so for two
reasons.
1) We must keep the origin_rinfos list ordered exactly the same way as
varinfos.
2) add_unique_group_var() is designed for estimate_num_groups(), where a
larger number of groups is worse. While estimating the number of hash
buckets, we have the opposite: a lesser number of groups is worse.
Therefore, we don't have to remove "known equal" vars: a removed var
might still contribute valuable information to the multivariate
statistics, growing the number of groups.
This commit adds custom code to estimate_multivariate_bucketsize() to
initialize varinfos properly.
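The gist of that initialization (a sketch; the surrounding loop over
the clauses is omitted, and list handling is approximate):

    /* Copy the expression and clear varnullingrels before storing it,
     * keeping origin_rinfos aligned with varinfos. */
    Var        *var = (Var *) copyObject(expr);

    var->varnullingrels = NULL;
    varinfo->var = (Node *) var;
    varinfos = lappend(varinfos, varinfo);
    origin_rinfos = lappend(origin_rinfos, rinfo);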
Reported-by: Robins Tharakan <tharakan@gmail.com>
Discussion: https://postgr.es/m/18885-da51324078588253%40postgresql.org
Author: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
When a foreign key constraint is placed on a partitioned table, we
actually make two pg_constraint entries associated with that table.
(I have my doubts about the wisdom of that, but it's been like that
since v12 and post-feature-freeze is no time to be messing with such
entrenched decisions.) The second "child" entry always had a name
generated according to the default rule, "table_column(s)_fkey[nnn]",
even if the primary entry had an unrelated user-specified name. The
trouble with doing that is that the default name could collide with
the user-specified name of some other constraint on the same table.
While we were willing to adjust the generated name to avoid
collisions, that only helps if it's made second; if it's made first
then creation of the other constraint would fail, potentially causing
dump/reload or pg_upgrade failures.
The core of the problem here is that we're infringing on user
namespace, so I doubt that there's any 100% solution other than to
find a way to not need the "child" entry. In the meantime, it seems
like it'd be an improvement to make the child's name be the name of
the parent constraint with an underscore and digit(s) appended as
necessary to make it unique. This rule can in theory fail in the same
way, but it seems much less probable; for one thing, this rule is
guaranteed not to match primary entries having auto-generated names.
(While an auto-generated primary name isn't user-specified to begin
with, it acts like that during dump/reload, so collisions against such
names are definitely possible.)
An additional bonus, visible in some of the regression test cases
that change here, arises from the fact that some error messages
cite the child constraint's name not the parent's. In the
previous approach the two names could be completely unrelated,
leading to user confusion --- the more so since psql's \d command
hides child constraints. With this approach it's hopefully much
clearer which constraint-the-user-knows-about is failing.
However, that does mean that there's user-visible behavior change
occurring here, making it seem like not something to back-patch.
I feel it's not too late for v18, though.
Reported-by: Kirill Reshke <reshkekirill@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Alvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CALdSSPhGitjpTfzEMJN-Y2x+Q-5QChSxAsmSJ1-E8mQJLkHOqQ@mail.gmail.com
The stack allocated JsonLexContexts, in combination with codepaths
using goto, were causing warnings when compiling with LTO enabled,
as the optimizer is unable to figure out that it is safe. Rather than
contort the code with workarounds, simply heap-allocate the structs
instead, as they are not in any performance-critical paths.
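The pattern, roughly (a sketch, not the exact hunks):

    /* Heap-allocate so the optimizer need not reason about a stack
     * object's lifetime across goto paths. */
    JsonLexContext *lex = palloc0(sizeof(JsonLexContext));

    /* ... parse, possibly jumping to cleanup ... */

    freeJsonLexContext(lex);
    pfree(lex);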
Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2074634.1744839761@sss.pgh.pa.us
The routine was coded so that a WAL sender was always used, a state
required only for one failure test related to START_REPLICATION. This
test is changed so that a WAL sender is used by passing a replication
option to psql_fails_like(), instead of forcing the use of a WAL sender
for all the tests.
This has come up as useful in the context of a separate bug fix where
we are looking at extending tests for some failure scenarios. These
tests need to happen in the context of a normal backend, and not a WAL
sender where the extended query protocol cannot be used.
Discussion: https://postgr.es/m/aAXkJIOildLUA7vQ@paquier.xyz
Commit 3f28b2fcac tried to ensure that the replication origin shouldn't be
advanced in case of an ERROR in the apply worker, so that it can request
the same data again after restart. However, it is possible that an ERROR
was caught and handled by a (say PL/pgSQL) function, and the apply worker
continues to apply further changes, in which case, we shouldn't reset the
replication origin.
Ensure that the origin is reset only when the apply worker exits after
an ERROR.
Commit 3f28b2fcac added a new function, geterrlevel(), which we remove
in HEAD as part of this commit but keep in back-branches to avoid
breaking any applications. A separate case can be made for having such
a function even in HEAD.
Reported-by: Shawn McCoy <shawn.the.mccoy@gmail.com>
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 16, where it was introduced
Discussion: https://postgr.es/m/CALsgZNCGARa2mcYNVTSj9uoPcJo-tPuWUGECReKpNgTpo31_Pw@mail.gmail.com
This assertion, based on pending_since (a timestamp used to prevent
stats reports from being too frequent, or in case a partial flush
happens), is reached when no data can be flushed even though a previous
call of pgstat_report_stat() determined that some stats data was in
need of a flush. So pending_since is set when some stats data is
pending (in non-force mode) or if report attempts are too frequent, and
reset to 0 once all stats have been flushed.
Since 5cbbe70a9cc6, WAL senders have begun to report their stats on a
periodic basis for IO stats in v16~ and backend stats on HEAD, creating
some friction with the concurrent pgstat_report_stat() calls that can
happen in the context of a WAL sender (shutdown callback doing a final
report or backend-related code paths). This problem is the cause of
spurious failures in the TAP tests.
In theory, this assertion can also be reached in v15, even if that's
very unlikely. For example, a process, say a background worker, could
do periodic and direct stats flushes with concurrent calls of
pgstat_report_stat() that could cause conflicting values of
pending_since. This can be done with WAL or SLRU stats flushes using
pgstat_flush_wal() or pgstat_slru_flush(). HEAD makes this situation
easier to reach with custom cumulative stats.
This commit removes the assertion altogether, per discussion, as it is
more useful to keep the state of things as they are for the WAL sender.
The assertion could use a special state based on for example
am_walsender, but I doubt that this would be meaningful in the long run
based on the other arguments raised while discussing this issue.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/1489124.1744685908@sss.pgh.pa.us
Discussion: https://postgr.es/m/dwrkeszz6czvtkxzr5mqlciy652zau5qqnm3cp5f3p2po74ppk@omg4g3cc6dgq
Backpatch-through: 15
Cluster.pm's connect_fails routine has long had the ability to
sniff the postmaster log file for expected messages after a
connection failure. However, that's always had a race condition:
on some platforms it's possible for psql to exit and the test
script to slurp up the postmaster log before the backend process
has been able to write out its final log messages. Back in
commit 55828a6b6 we disabled a bunch of tests after discovering
that, and the aim of this patch is to re-enable them.
(The sibling function connect_ok doesn't seem to have a similar
problem, mainly because the messages we look for come out during
the authentication handshake, so that if psql reports successful
connection they should certainly have been emitted already.)
The solution used here is borrowed from 002_connection_limits.pl's
connect_fails_wait routine: set the server's log_min_messages setting
to DEBUG2 so that the postmaster will log child-process exit, and then
wait till we see that log entry before checking for the messages we
are actually interested in.
If a TAP test uses connect_fails' log_like or log_unlike options, and
forgets to set log_min_messages, those connect_fails calls will now
hang until timeout. Fixing up the existing callers shows that we had
several other TAP tests that were in theory vulnerable to the same
problem. It's unclear whether the lack of failures is just luck, or
lack of buildfarm coverage, or perhaps there is some obscure timing
effect that only manifests in SSL connections. In any case, this
change should in principle make those other call sites more robust.
I'm not inclined to back-patch though, unless sometime we observe
an actual failure in one of them.
Reported-by: Andrew Dunstan <andrew@dunslane.net>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/984fca80-85a8-4c6f-a5cc-bb860950b435@dunslane.net
While heapam reproduces the insertion order of rows well, updates
can move rows to varying places depending on autovacuum activity.
In most regression tests we've guarded against getting variable
results due to that, but float4.sql and float8.sql had escaped
notice so far because they update tables that are too small for
autovacuum to pay attention to.
With increasing interest in non-heap table AMs, it seems worth
allowing for update behaviors that are not like heapam's. Hence,
add ORDER BY to stabilize the results in case the updates put
the rows in a different order. (We'll continue to assume that a
seqscan will reproduce original insertion order, though. Removing
that assumption would require vastly-more-invasive test changes.)
Author: Pavel Borisov <pashkin.elfe@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CALT9ZEExHAnBoBVQzQuWPMKUbapF5-FBO3fdeYG3s2tuWQz1NQ@mail.gmail.com
This error message was 'runaway "struct_name"', which isn't all
that clear; I think 'could not find closing brace for "struct_name"'
is better. Also, provide the location of the struct start using the
script's usual '$file:$lineno' style.
Bug: #18901
Reported-by: Clemens Ruck <clemens.ruck@t-online.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18901-424272abe01357e6@postgresql.org
This injection point was named "AtEOXact_Inval-with-transInvalInfo", not
respecting the implied naming convention that injection points should
use lower-case characters, with terms separated by dashes. All the
other points defined in the tree follow this style, so let's be more
consistent.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/OSCPR01MB14966E14C1378DEE51FB7B7C5F5B32@OSCPR01MB14966.jpnprd01.prod.outlook.com
Backpatch-through: 17
Word boundaries are based on whether a character is alphanumeric or
not. For the PG_UNICODE_FAST collation, alphanumeric includes
non-ASCII digits; whereas for the PG_C_UTF8 collation, it only
includes digits 0-9. Pass down the right information from the
pg_locale_t into initcap_wbnext to differentiate the behavior.
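Schematically (the struct and helper are illustrative, not the exact
code):

    /* Thread the collation's notion of "alphanumeric" from the
     * pg_locale_t down into the word-boundary callback. */
    typedef struct
    {
        const char *str;
        size_t      len;
        size_t      offset;
        bool        posix;  /* true for PG_C_UTF8: digits are 0-9 only */
    } WordBoundaryState;

    static bool
    wb_isalnum(pg_wchar u, WordBoundaryState *wbstate)
    {
        return pg_u_isalnum(u, wbstate->posix);
    }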
Reported-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20250417135841.33.nmisch@google.com
exec_replication_command created a cmd_context to work in and
then deleted it on exit. This is pretty dangerous because
some replication commands start/finish transactions. In the
wake of commit 1afe31f03, that could lead to re-selecting a
CurrentMemoryContext that's already been deleted, leading to
hilarity such as a memory context that is its own parent.
To fix, let's make the cmd_context persist across
exec_replication_command calls; instead of deleting it, we'll just
reset it each time. In this way it retains the same identity and
there's no problem if transaction abort restores it as the working
context. It probably even saves a few microseconds to do this.
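The pattern, in brief (a sketch; names per the description above):

    /* exec_replication_command()'s context handling after the fix */
    static MemoryContext cmd_context = NULL;

    if (cmd_context == NULL)
        cmd_context = AllocSetContextCreate(TopMemoryContext,
                                            "Replication command context",
                                            ALLOCSET_DEFAULT_SIZES);
    else
        MemoryContextReset(cmd_context);

    old_context = MemoryContextSwitchTo(cmd_context);
    /* ... parse and execute the command ... */
    MemoryContextSwitchTo(old_context);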
This fix also ensures that exec_replication_command returns to the
caller (PostgresMain) with the same context active that had been
when it was called (probably MessageContext). The previous
coding could get that wrong too.
Reported-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAO6_XqoJA7-_G6t7Uqe5nWF3nj+QBGn4F6Ptp=rUGDr0zo+KvA@mail.gmail.com
The case of "node == parent" might seem impossible, since we just
allocated the new node. But it's possible if parent is a dangling
reference to a recently-deleted context. In fact, given aset.c's
habit of recycling contexts, it's actually rather likely if that's so.
If we'd had this assertion before, it would have simplified debugging
a recently-identified walsender issue.
Reported-by: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAO6_XqoJA7-_G6t7Uqe5nWF3nj+QBGn4F6Ptp=rUGDr0zo+KvA@mail.gmail.com
Blocking checkpoint phase 2 requires MarkBufferDirty() and
BUFFER_LOCK_EXCLUSIVE; neither suffices by itself. transam/README documents
this, citing SyncOneBuffer(). Update the DELAY_CHKPT_START documentation to
say this. Expand the heap_inplace_update_and_unlock() comment that cites
XLogSaveBufferForHint() as precedent, since heap_inplace_update_and_unlock()
could have opted not to use DELAY_CHKPT_START.
Commit 8e7e672cdaa6bfec85d4d5dd9be84159df23bb41 added DELAY_CHKPT_START to
heap_inplace_update_and_unlock(). Since commit
bc6bad88572501aecaa2ac5d4bc900ac0fd457d5 reverted it in non-master branches,
no back-patch.
Discussion: https://postgr.es/m/20250406180054.26.nmisch@google.com
v14 commit 1f95181b44c843729caaa688f74babe9403b5850 and its v13
equivalent caused timing-dependent failures in archive recovery, at
restartpoints. The symptom was "invalid magic number 0000 in log
segment X, offset 0", "unexpected pageaddr X in log segment Y, offset 0"
[X < Y], or an assertion failure. Commit
3635a0a35aafd3bfa80b7a809bc6e91ccd36606a and predecessors back-patched
v15 changes to fix that. This test reproduces the problem
probabilistically, typically in less than 1000 iterations of the test.
Hence, buildfarm and CI runs would have surfaced enough failures to get
attention within a day.
Reported-by: Arun Thirupathi <arunth@google.com>
Discussion: https://postgr.es/m/20250306193013.36.nmisch@google.com
Backpatch-through: 13
Commit 7102070329d8147246d2791321f9915c3b5abf31 fixed a similar bug, but
it missed the case of database-wide ANALYZE ("use_own_xacts" mode).
Commit a07e03fd8fa7daf4d1356f7cb501ffe784ea6257 changed consequences
from silent discard of a pg_class stats (relpages et al.) update to
ERROR "tuple to be updated was already modified". Losing a relpages
update of an ON COMMIT DELETE ROWS table was negligible, but a
COMMIT-time error isn't negligible. Back-patch to v13 (all supported
versions).
Reported-by: Richard Guo <guofenglinux@gmail.com>
Reported-by: Robins Tharakan <tharakan@gmail.com>
Discussion: https://postgr.es/m/CAMbWs4-XwMKMKJ_GT=p3_-_=j9rQSEs1FbDFUnW9zHuKPsPNEQ@mail.gmail.com
Backpatch-through: 13
1349d2790 added support so that aggregate functions with an ORDER BY or
DISTINCT clause could make use of presorted inputs to avoid an implicit
sort within nodeAgg.c. That commit failed to consider that a FILTER
clause may exist that filters rows before the aggregate function
arguments are evaluated. That can be problematic if an aggregate
argument contains an expression which could error out during evaluation.
It's perfectly valid to want to have a FILTER clause which eliminates
such values, and with the pre-sorted path added in 1349d2790, it was
possible that the planner would produce a plan with a Sort node above
the Aggregate to perform the sort on the aggregate's arguments long before
the Aggregate node would filter out the non-matching values.
Here we fix this by inspecting ORDER BY / DISTINCT aggregate functions
which have a FILTER clause to see if the aggregate's arguments are
anything more complex than a Var or a Const. Evaluating these isn't
going to cause an error. If we find any non-Var, non-Const parameters
then the planner will now opt to perform the sort in the Aggregate node
for these aggregates, i.e. disable the presorted aggregate optimization.
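In outline, the new check looks something like this (a sketch of the
rule described above, not the exact code):

    /* If a FILTER clause exists, only use presorted input when every
     * aggregate argument is a Var or a Const, which cannot error out. */
    if (aggref->aggfilter != NULL)
    {
        ListCell   *lc;

        foreach(lc, aggref->args)
        {
            TargetEntry *tle = (TargetEntry *) lfirst(lc);

            if (!IsA(tle->expr, Var) && !IsA(tle->expr, Const))
                return false;   /* let nodeAgg.c do the sort instead */
        }
    }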
An alternative fix would have been to completely disallow the presorted
optimization for Aggrefs with any FILTER clause, but that wasn't done as
that could cause large performance regressions for queries that see
significant gains from 1349d2790 due to presorted results coming in from
an Index Scan.
Backpatch to 16, where 1349d2790 was introduced.
Author: David Rowley <dgrowleyml@gmail.com>
Reported-by: Kaimeh <kkaimeh@gmail.com>
Diagnosed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAK-%2BJz9J%3DQ06-M7cDJoPNeYbz5EZDqkjQbJnmRyQyzkbRGsYkA%40mail.gmail.com
Backpatch-through: 16
Compared to v17 with only \bind able to do extended query protocol work,
v18 has now a total of 11 meta-commands related to the extended query
protocol. These were all listed under the "General" section of the
--help=commands output and are specialized, bloating the generated
output.
All these meta-commands are moved into a new section called "Extended
Query Protocol", listed at the end of --help=commands.
This split has been suggested by Noah Misch.
Discussion: https://postgr.es/m/20250415213450.1f.nmisch@google.com
Noah has reported that the current wording was confusing compared to the
description of the underlying libpq routine. The new wording is from
me.
Reported-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20250415213450.1f.nmisch@google.com
When an invalid number of results is requested for \getresults, the
status code returned by exec_command_getresults() was PSQL_CMD_SKIP_LINE
and not PSQL_CMD_ERROR.
This led to incorrect behaviors, with ON_ERROR_STOP for example.
Reported-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20250415213450.1f.nmisch@google.com
The format of the injection point names used by the AIO code does not
match the existing naming convention used everywhere else in the code,
so let's be consistent. These points are used in test_aio.
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://postgr.es/m/Z_yTB80bdu1sYDqJ@paquier.xyz
Commit 173c97812ff replaced the hardcoded "global/pg_control" in a
pg_upgrade log message with a string-literal concatenation of the
XLOG_CONTROL_FILE macro. However, this change made the message
untranslatable.
This commit fixes the issue by using %s with XLOG_CONTROL_FILE instead of
that literal concatenation, allowing the message to be translated properly.
It also wraps the file path in double quotes for consistency with similar
log messages.
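For example (messages illustrative only):

    /* untranslatable: the macro is glued into the message string */
    pg_log_error("could not open " XLOG_CONTROL_FILE);

    /* translatable: the path is substituted via %s, and quoted */
    pg_log_error("could not open file \"%s\"", XLOG_CONTROL_FILE);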
Author: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/20250407.155546.2129693791769531891.horikyota.ntt@gmail.com
Use appendPQExpBufferStr when there are no parameters and
appendPQExpBufferChar when the string length is 1.
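That is (illustrative calls):

    PQExpBufferData buf;

    initPQExpBuffer(&buf);
    appendPQExpBuffer(&buf, "WHERE oid = %u", oid);   /* has parameters */
    appendPQExpBufferStr(&buf, " ORDER BY 1");        /* no parameters */
    appendPQExpBufferChar(&buf, '\n');                /* single character */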
Unlike 3fae25cbb, which fixed this issue for code that was new to v18,
this one fixes up instances which exist in the backbranches. We've
historically tried to maintain this standard and if we're going to
continue doing that, then we won't be doing that selectively based on
when the code was introduced. Now seems like a good time to flush out the
existing misuses. Waiting until v19 just prolongs their existence in
terms of released versions that the misuses exist in.
Author: David Rowley <dgrowleyml@gmail.com>
Discussion: https://postgr.es/m/CAApHDvoARMvPeXTTC0HnpARBHn-WgVstc8XFCyMGOzvgu_1HvQ@mail.gmail.com
Both pg_get_process_memory_contexts() and pg_backend_memory_contexts
have 1-based levels, whereas pg_log_backend_memory_contexts() was using
0-based levels. Align these.
This results in slightly saner behavior from MemoryContextStatsDetail()
with regard to the max_level. Previously it would stop one level before
the maximum requested level rather than at that level.
Reported-by: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Author: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Author: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Melih Mutlu <m.melihmutlu@gmail.com>
Reviewed-by: Rahila Syed <rahilasyed90@gmail.com>
Discussion: https://postgr.es/m/395ea5d4fe190480efa95bf533485c70@oss.nttdata.com
The "children" list won't be used until "got_children" has been set
true, but older compilers don't get that; about half a dozen
buildfarm animals are warning about this. Issue added by 11ff192b5.
While here, improve slightly-shaky grammar in comment.
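That is, roughly:

    /* Initialize up front so older compilers see that "children" is
     * never used uninitialized. */
    List       *children = NIL;
    bool        got_children = false;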
Discussion: https://postgr.es/m/2057835.1744833309@sss.pgh.pa.us
This gets rid of repetitive get_typlen calls in postquel_sub_params,
which show up as costing a few percent of the runtime in simple test
cases (more with more parameters).
In combination with the preceding patches, this gets us most of the
way back down to the amount of per-call overhead that functions.c
had before commit 0dca5d68d. There are some more things that could
be done, but this seems like an okay place to stop for v18.
At this point, the only data structures we allocate directly in
fcontext are the SQLFunctionCache struct itself, the ParamListInfo
struct, and the execution_state array, all of which are small and
perfectly capable of being re-used across executions of the same
FmgrInfo. Hence, let's give them the same lifespan as the FmgrInfo.
This step gets rid of the separate SQLFunctionLink struct and makes
fn_extra point to SQLFunctionCache again. We also get rid of the
separate fcontext memory context and allocate these items directly
in fn_mcxt.
For notational simplicity, SQLFunctionCache still has an fcontext
field, but it's just a copy of fn_mcxt.
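The resulting lookup pattern in fmgr_sql is roughly this (a sketch):

    /* The cache hangs off fn_extra and lives in fn_mcxt, so it is
     * re-used across calls with the same FmgrInfo. */
    SQLFunctionCache *cache = (SQLFunctionCache *) fcinfo->flinfo->fn_extra;

    if (cache == NULL)
    {
        cache = MemoryContextAllocZero(fcinfo->flinfo->fn_mcxt,
                                       sizeof(SQLFunctionCache));
        fcinfo->flinfo->fn_extra = cache;
    }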
The motivation for this is to allow these structures to live as
long as the FmgrInfo and be re-used across calls, restoring the
original design without its propensity for memory leaks. This
gets rid of some per-call overhead that we added in 0dca5d68d.
We also make an effort to re-use the JunkFilter and result slot.
Those might need to change if the function definition changes,
so we compromise by rebuilding them if the cached plan changes.
This also moves the tuplestore into fn_mcxt so that it can be
re-used across calls, again undoing a change made in 0dca5d68d.
Put the JunkFilter and its result slot (and thence also
some subsidiary data such as the result tupledesc) into a
separate subcontext "jfcontext". This doesn't accomplish
a lot at this point, because we make a new JunkFilter each
time through the SQL function. However, the plan is to make
the fcontext long-lived, and that raises the possibility
that we'll need a new JunkFilter because the plan for the
result-generating query changes. A separate context makes
it easy to free the obsoleted data when that happens.
Also, instead of always running the sub-executor in fcontext,
make a separate context for it if we're doing lazy eval of
a SRF, and otherwise just run it inside CurrentMemoryContext.
Previously, much of this code ran with CurrentMemoryContext set
to be the function's fcontext, so that we tended to leak a lot of
stuff there. Commit 0dca5d68d dealt with that by releasing the
fcontext at the completion of each SQL function call, but we'd
like to go back to the previous approach of allowing the fcontext
to be query-lifespan. To control the leakage problem, rearrange
the code so that we mostly run in the memory context that fmgr_sql
is called in (which we expect to be short-lived). Notably, this
means that parsing/planning is all done in the short-lived context
and doesn't leak cruft into fcontext.
This patch also fixes the allocation of execution_state records
so that we don't leak them across executions. I set that up
with a re-usable array that contains at least as many
execution_state structs as we need for the current querytree.
The chain structure is still there, but it's not really doing
much for us, and maybe somebody will be motivated to get rid
of it. I'm not though.
This incidentally also moves the call of BlessTupleDesc to be
with the code that creates the JunkFilter. That doesn't make
much difference now, but a later patch will reduce the number
of times the JunkFilter gets made, and we needn't bless the
results any more often than that.
We still leak a fair amount in fcontext, particularly when
executing utility statements, but that's material for a
separate patch step; the point here is only to get rid of
unintentional allocations in fcontext.
Late in the development of commit 0dca5d68d, I added a step to copy
the result tlist we extract from the cached final query, because
I was afraid that that might not last as long as the JunkFilter that
we're passing it off to. However, that turns out to cost a noticeable
number of cycles, and it's really quite unnecessary because the
JunkFilter will not examine that tlist after it's been created.
(ExecFindJunkAttribute would use it, but we don't use that function
on this JunkFilter.) Hence, remove the copy step. For safety,
reset the might-become-dangling jf_targetList pointer to NIL.
In passing, remove DR_sqlfunction.cxt, which we don't use anymore;
it's confusing because it's not entirely clear which context it
ought to point at.
Commit 0bada39c83a150079567a6e97b1a25a198f30ea3 fixed a bug of this kind,
which existed in all branches for six days before detection. While the
probability of reaching the trouble was low, the disruption was extreme. No
new backends could start, and service restoration needed an immediate
shutdown. Hence, add this to catch the next bug like it.
The new check in RelationIdGetRelation() suffices to make autovacuum detect
the bug in commit 243e9b40f1b2dd09d6e5bf91ebf6e822a2cd3704 that led to commit
0bada39. This commit also adds the check in a number of similar places.
It replaces each
Assert(IsTransactionState()) that pertained to a conditional catalog read.
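The shape of the replacement (a sketch; the message wording is
illustrative):

    /* Instead of Assert(IsTransactionState()), fail visibly: */
    if (!IsTransactionState())
        elog(ERROR, "cannot read system catalogs outside a transaction");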
No back-patch for now, but a back-patch of commit 243e9b4 should back-patch
this, too. A back-patch could omit the src/test/regress changes, since back
branches won't gain new index columns.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/20250410191830.0e.nmisch@google.com
Discussion: https://postgr.es/m/10ec0bc3-5933-1189-6bb8-5dec4114558e@gmail.com
The end callback for ZStandard compression frees the private_data
but doesn't set the pointer to NULL after freeing. This is not a
bug as the code stands, since nothing dereferences the pointer upon
returning from the callback, but it is good practice to do so.
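The change amounts to (a sketch; field names per the surrounding
callback):

    pg_free(cs->private_data);
    cs->private_data = NULL;    /* don't leave a dangling pointer behind */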
Author: Alexander Kuznetsov <kuznetsovam@altlinux.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/efaee52b-9550-44ca-8633-ea86076b3283@altlinux.org
In pg_dump and pg_restore, _allocAH() calls _discoverArchiveFormat() to
determine the archive format when the input format is unknown.
If the input or discovered format is unrecognized, it reports an error
including the archive format number.
If the discovered format is unrecognized, its number should be shown in
the error message. But previously the error message mistakenly showed
the originally requested format number (i.e., unknown one) instead of
the discovered one, due to referencing the wrong variable in the error
message.
This commit corrects the issue by using the appropriate variable in
the error message.
This fix has no practical impact since _discoverArchiveFormat() never
returns an unrecognized format and that error message is actually
never output. Therefore, while the issue exists in back branches,
it's not worth the trouble and buildfarm cycles to back-patch.
So this fix is applied only to the master branch.
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CAKYtNAqu+N-Ab2Fq6wzNSOm_-0N-BMneanYNV1+6kFDXjva1Eg@mail.gmail.com