With GB18030 as source encoding, applications could crash the server via
SQL functions convert() or convert_from(). Applications themselves
could crash after passing unterminated GB18030 input to libpq functions
PQescapeLiteral(), PQescapeIdentifier(), PQescapeStringConn(), or
PQescapeString(). Extension code could crash by passing unterminated
GB18030 input to jsonapi.h functions. All those functions have been
intended to handle untrusted, unterminated input safely.
A crash required allocating the input such that the last byte of the
allocation was the last byte of a virtual memory page. Some malloc()
implementations take measures against that, making the SIGSEGV hard to
reach. Back-patch to v13 (all supported versions).
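For illustration only, here is a minimal standalone sketch of the bug class
(not the committed fix): a scanner that trusts the encoding's reported
character width can step past the end of a buffer whose final byte begins a
truncated multibyte sequence, whereas a bounded scanner stops first. The
length rules below are simplified stand-ins, not a libpq or server API.
    #include <stddef.h>

    /*
     * Sketch only: walk a GB18030 string without reading past "len" bytes.
     * The width rules are simplified; the point is the bounds check.
     */
    static size_t
    gb18030_scan_bounded(const unsigned char *s, size_t len)
    {
        size_t  i = 0;
        size_t  nchars = 0;

        while (i < len && s[i] != '\0')
        {
            int     w = 1;

            if (s[i] > 0x7f)
            {
                /*
                 * Multibyte lead byte: the width depends on the next byte,
                 * so refuse to look at it unless it is inside the buffer.
                 */
                if (len - i < 2)
                    break;      /* truncated sequence: stop, don't overread */
                w = (s[i + 1] >= 0x30 && s[i + 1] <= 0x39) ? 4 : 2;
            }
            if ((size_t) w > len - i)
                break;          /* 4-byte sequence truncated at the end */
            i += w;
            nchars++;
        }
        return nchars;
    }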
Author: Noah Misch <noah@leadboat.com>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Backpatch-through: 13
Security: CVE-2025-4207
Start the file with static functions not specific to pe_test_vectors
tests. This way, new tests can use them without disrupting the file's
layout. Change report_result() PQExpBuffer arguments to plain strings.
Back-patch to v13 (all supported versions), for the next commit.
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Backpatch-through: 13
Security: CVE-2025-4207
fc069a3a6319 implemented Self-Join Elimination (SJE) and put the related
logic into ChangeVarNodes_walker(). This commit refactors that by removing
the SJE-related logic from ChangeVarNodes_walker() and instead adding a
custom callback to ChangeVarNodesExtended(), which gets a chance to process
a node before ChangeVarNodes_walker(). Passing this callback to
ChangeVarNodesExtended() allows SJE-related node handling to be kept within
analyzejoins.c.
Reported-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAMbWs49PE3CvnV8vrQ0Dr%3DHqgZZmX0tdNbzVNJxqc8yg-8kDQQ%40mail.gmail.com
Author: Andrei Lepikhov <lepihov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
The previous code was allocating more memory and copying more data than
necessary because it specified the wrong PgStat_KindInfo member as the
size argument for MemoryContextAlloc and memcpy, respectively.
Although these issues have existed since 5891c7a8e, there have been no
reports from the field. So for now, it seems sufficient to fix them in
master.
Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Gurjeet Singh <gurjeet@singh.im>
Discussion: https://postgr.es/m/CAPmGK15eTRCZTnfgQ4EuBNo%3DQLYGFEbXS_7m2dXqtkcT7L8qrQ%40mail.gmail.com
Also adjust the phrasing in the comments.
Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Author: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Reviewed-by: Gurjeet Singh <gurjeet@singh.im>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAPmGK17%3DPHSDZ%2B0G6jcj12buyyE1bQQc3sbp1Wxri7tODT-SDw%40mail.gmail.com
Backpatch-through: 15
Consistently prevent nbtree array advancement from treating a scankey as
required when operating in pstate.forcenonrequired mode. Otherwise, we
risk a NULL pointer dereference. This was possible in the path where
_bt_check_compare is called to recheck a tuple that advanced all of the
scan's arrays to matching values: its continuescan=false handling
expects _bt_advance_array_keys to have been called with a valid pstate,
but it'll always be NULL during sktrig_required=false calls (which is
how _bt_advance_array_keys must be called when pstate.forcenonrequired
is set).
Oversight in commit 8a510275, which optimized nbtree search scan key
comparisons.
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/CAHgHdKsn2W=gPBmj7p6MjQFvxB+zZDBkwTSg0o3f5Hh8rkRrsA@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzmodSE+gpTd1CRGU9ez8ytyyDS+Kns2r9NzgUp1s56kpw@mail.gmail.com
To insert the merged GIN entries in _gin_parallel_merge, the leader
calls ginEntryInsert(). This may allocate memory, e.g. for a new leaf
tuple. This was allocated in the PortalContext, and kept until the end
of the index build. For most GIN indexes the amount of leaked memory is
negligible, but for custom opclasses with large keys it may cause OOMs.
Fixed by calling ginEntryInsert() in a temporary memory context, reset
after each insert. Other ginEntryInsert() callers do this too, except
that the context is reset after batches of inserts. More frequent resets
don't seem to hurt performance; they may even help a bit.
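Schematically, the pattern looks like the sketch below. The iteration
helper fetch_next_merged_entry() and the surrounding variables are
placeholders for the real _gin_parallel_merge state, not actual code.
    #include "postgres.h"
    #include "access/gin_private.h"
    #include "utils/memutils.h"

    /* Placeholder for the real merged-entry iteration. */
    extern bool fetch_next_merged_entry(OffsetNumber *attnum, Datum *key,
                                        GinNullCategory *category,
                                        ItemPointerData **items,
                                        uint32 *nitems);

    static void
    merge_and_insert_entries(GinState *ginstate, GinStatsData *buildStats)
    {
        MemoryContext   insertCtx;
        MemoryContext   oldCtx;
        OffsetNumber    attnum;
        Datum           key;
        GinNullCategory category;
        ItemPointerData *items;
        uint32          nitems;

        insertCtx = AllocSetContextCreate(CurrentMemoryContext,
                                          "GIN merge insert context",
                                          ALLOCSET_DEFAULT_SIZES);

        while (fetch_next_merged_entry(&attnum, &key, &category,
                                       &items, &nitems))
        {
            oldCtx = MemoryContextSwitchTo(insertCtx);
            /* Leaf tuples etc. built here land in insertCtx ... */
            ginEntryInsert(ginstate, attnum, key, category,
                           items, nitems, buildStats);
            MemoryContextSwitchTo(oldCtx);
            /* ... and are released right away, not kept in PortalContext. */
            MemoryContextReset(insertCtx);
        }

        MemoryContextDelete(insertCtx);
    }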
Report and fix by Vinod Sridharan.
Author: Vinod Sridharan <vsridh90@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAFMdLD4p0VBd8JG=Nbi=BKv6rzFAiGJ_sXSFrw-2tNmNZFO5Kg@mail.gmail.com
We only need a tuplestore if we're actually going to accumulate
multiple result tuples. Obviously then we don't need one for non-set-
returning functions; but even a SRF doesn't need one if we decide to
use "lazyEval" (one row at a time) mode. In these cases, it's
sufficient to use the junkfilter's result slot to hold the single row
that's due to be returned. We just need to "materialize" that slot
to ensure it holds onto the data past shutdown of the sub-executor.
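For context, the slot-level operation that makes this safe is roughly the
following; this is illustrative only, with placeholder names rather than
the actual fmgr_sql code.
    #include "postgres.h"
    #include "executor/executor.h"
    #include "executor/tuptable.h"

    static void
    keep_single_result_row(JunkFilter *junkfilter, TupleTableSlot *rawslot)
    {
        /* The junkfilter's output slot holds the single row to return. */
        TupleTableSlot *slot = ExecFilterJunk(junkfilter, rawslot);

        /*
         * Force the slot to own a local copy of the tuple, so the data
         * survives shutdown of the sub-executor that produced it.
         */
        ExecMaterializeSlot(slot);
    }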
The original intent of this patch was partially to save a few cycles
(by not putting tuples into a tuplestore only to pull them back out
immediately), but mostly to ensure that we don't use a tuplestore
in non-set-returning functions. That's because I had concerns
about whether a tuplestore is safe to keep across queries,
which was possible for functions invoked via long-lived FmgrInfos
such as those kept in the typcache. There are no cases where SRFs
are called that way, so getting rid of the tuplestore in non-SRFs
should make things safer.
However, it emerges that running fmgr_sql in a short-lived context
(as 595d1efed made it do) makes the existing coding unsafe anyway:
we can end up with a long-lived TupleTableSlot holding a freeable
reference to a short-lived tuple, resulting in a double-free crash.
Not trying to pull tuples out of the tuplestore using that slot
dodges the problem, so I'm going to commit this now rather than
invent a band-aid solution for v18.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2443532.1744919968@sss.pgh.pa.us
Discussion: https://postgr.es/m/9f975803-1a1c-4f21-b987-f572e110e860@gmail.com
For self-referencing foreign keys in partitioned tables, we weren't
handling creation of pg_constraint rows during CREATE TABLE ... PARTITION OF
or ALTER TABLE ... ATTACH PARTITION. This is an old bug -- mostly,
we broke this in 614a406b4ff1 while trying to fix it (so 12.13, 13.9,
14.6 and 15.0 and up all behave incorrectly). This commit reverts part
of that with additional fixes for full correctness, and installs more
tests to verify the parts we broke, not just the catalog contents but
also the user-visible behavior.
Backpatch to all live branches. In branches 13 and 14, commit
46a8c27a7226 changed the behavior during DETACH to drop a FK
constraint rather than trying to repair it, because the complete fix of
repairing catalog constraints was problematic due to lack of previous
fixes. For this reason, the test behavior in those branches is a bit
different. However, as best as I can tell, the fix works correctly
there.
In release notes we have to recommend that all self-referencing foreign
keys on partitioned tables be recreated if partitions have been created
or attached after the FK was created, keeping in mind that violating
rows might already be present on the referencing side.
Reported-by: Guillaume Lelarge <guillaume@lelarge.info>
Reported-by: Matthew Gabeler-Lee <fastcat@gmail.com>
Reported-by: Luca Vallisa <luca.vallisa@gmail.com>
Discussion: https://postgr.es/m/CAECtzeWHCA+6tTcm2Oh2+g7fURUJpLZb-=pRXgeWJ-Pi+VU=_w@mail.gmail.com
Discussion: https://postgr.es/m/18156-a44bc7096f0683e6@postgresql.org
Discussion: https://postgr.es/m/CAAT=myvsiF-Attja5DcWoUWh21R12R-sfXECY2-3ynt8kaOqjw@mail.gmail.com
libpq-oauth.a includes libpq-int.h, which includes OpenSSL headers. The
Autoconf side picks up the necessary include directories via CPPFLAGS,
but Meson needs the dependency to be made explicit.
Reported-by: Nathan Bossart <nathandbossart@gmail.com>
Tested-by: Nathan Bossart <nathandbossart@gmail.com>
Tested-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aBTgjDfrdOZmaPgv%40nathan
The extension_control_path setting (commit 4f7f7b03758) did not
support extensions that set a custom "directory" setting in their
control file. Very few extensions use that, and during the discussion of
the previous commit it was suggested that the functionality might simply be
removed. But a fix turned out to be easier than initially thought, so this
commit just adds that support. The fix is to use control->control_dir as
the share dir when computing the path of the extension script files.
To make this work more sensibly overall, the directory suffix
"extension" is no longer to be included in the extension_control_path
value. To quote the patch, it would be
-extension_control_path = '/usr/local/share/postgresql/extension:/home/my_project/share/extension:$system'
+extension_control_path = '/usr/local/share/postgresql:/home/my_project/share:$system'
During the initial patch, there was some discussion about which of these
two approaches would be better, and the committed patch was a 50/50
decision. But the support for the "directory" setting pushed it the
other way, and it also seems that many people didn't like the previous
behavior much.
Author: Matheus Alcantara <mths.dev@pm.me>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: David E. Wheeler <david@justatheory.com>
Discussion: https://www.postgresql.org/message-id/flat/aAi1VACxhjMhjFnb%40msg.df7cb.de#0cdf7b7d727cc593b029650daa3c4fbc
Oversight in b0635bfda. -lintl is necessary for gettext on Mac, which
libpq-oauth depends on via pgport/pgcommon. (I'd incorrectly removed
this change from an earlier version of the patch, where it was suggested
by Peter Eisentraut.)
Per buildfarm member indri.
The additional packaging footprint of the OAuth Curl dependency, as well
as the existence of libcurl in the address space even if OAuth isn't
ever used by a client, has raised some concerns. Split off this
dependency into a separate loadable module called libpq-oauth.
When configured using --with-libcurl, libpq.so searches for this new
module via dlopen(). End users may choose not to install the libpq-oauth
module, in which case the default flow is disabled.
For static applications using libpq.a, the libpq-oauth staticlib is a
mandatory link-time dependency for --with-libcurl builds. libpq.pc has
been updated accordingly.
The default flow relies on some libpq internals. Some of these can be
safely duplicated (such as the SIGPIPE handlers), but others need to be
shared between libpq and libpq-oauth for thread-safety. To avoid
exporting these internals to all libpq clients forever, these
dependencies are instead injected from the libpq side via an
initialization function. This also lets libpq communicate the offsets of
PGconn struct members to libpq-oauth, so that we can function without
crashing if the module on the search path came from a different build of
Postgres. (A minor-version upgrade could swap the libpq-oauth module out
from under a long-running libpq client before it does its first load of
the OAuth flow.)
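As a rough sketch of the dlopen()-plus-injection approach: the module file
name below follows the naming scheme described here, with "18" as an example
major version, while the init symbol and its parameter struct are
hypothetical and do not reflect libpq's actual private ABI.
    #include <dlfcn.h>
    #include <stddef.h>

    /* Hypothetical injection table; the real private ABI is different. */
    struct oauth_libpq_hooks
    {
        size_t  conn_errmsg_offset;     /* offset into PGconn, not a pointer */
        void  (*block_sigpipe)(void);
        void  (*unblock_sigpipe)(void);
    };

    typedef int (*oauth_init_fn) (const struct oauth_libpq_hooks *hooks);

    static void *
    load_oauth_flow_module(const struct oauth_libpq_hooks *hooks)
    {
        void           *handle;
        oauth_init_fn   initfn;

        /* Versioned name avoids mixing modules across major releases. */
        handle = dlopen("libpq-oauth-18.so", RTLD_NOW | RTLD_LOCAL);
        if (handle == NULL)
            return NULL;        /* module not installed: default flow disabled */

        initfn = (oauth_init_fn) dlsym(handle, "libpq_oauth_init");
        if (initfn == NULL || !initfn(hooks))
        {
            dlclose(handle);
            return NULL;
        }
        return handle;
    }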
This ABI is considered "private". The module has no SONAME or version
symlinks, and it's named libpq-oauth-<major>.so to avoid mixing and
matching across Postgres versions. (Future improvements may promote this
"OAuth flow plugin" to a first-class concept, at which point we would
need a public API to replace this anyway.)
Additionally, NLS support for error messages in b3f0be788a was
incomplete, because the new error macros weren't being scanned by
xgettext. Fix that now.
Per request from Tom Lane and Bruce Momjian. Based on an initial patch
by Daniel Gustafsson, who also contributed docs changes. The "bare"
dlopen() concept came from Thomas Munro. Many people reviewed the design
and implementation; thank you!
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Wolfgang Walther <walther@technowledgy.de>
Discussion: https://postgr.es/m/641687.1742360249%40sss.pgh.pa.us
Make an nbtree array preprocessing assertion account for scans that add
fewer skip arrays than initially expected due to preprocessing finding
an unsatisfiable array qual.
Oversight in commit 92fe23d9.
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/CAHgHdKtQMhHy5qcB3KqCcGiW-Rp8P7KzUFRa9ZMKUiv6zen7LQ@mail.gmail.com
We try to avoid using strncpy() due to the ease with which it can
be misused. Convert this callsite to use strlcpy() instead to
match similar codepaths in this file.
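The conversion has roughly this shape; the buffer and source names are
illustrative, not the actual callsite. (strlcpy() is supplied by src/port
on platforms that lack it.)
    /* Before: correct only if the caller remembers to terminate by hand. */
    strncpy(buf, src, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    /* After: strlcpy() always NUL-terminates and never overruns buf. */
    strlcpy(buf, src, sizeof(buf));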
Suggested-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/2a796830-de2d-4030-b480-d673f6cc5d94@eisentraut.org
Presently, --missing-stats-only skips relations with reltuples set
to 0 because empty relations don't get optimizer statistics.
However, before v14, a reltuples value of 0 was ambiguous: it could
either mean the relation is empty, or it could mean that it hadn't
yet been vacuumed or analyzed. (Commit 3d351d916b taught v14 and
newer to use -1 for the latter case.) This ambiguity can cause
--missing-stats-only to inadvertently skip relations that need
optimizer statistics after upgrades to v18 and newer (since
reltuples is now transferred from the old cluster).
To fix, simply remove the check for reltuples != 0. This will
cause --missing-stats-only to analyze some empty tables, but that
doesn't seem too terrible a trade-off.
Reported-by: Christoph Berg <myon@debian.org>
Reviewed-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/aAjyvW5_fRGNr7yF%40msg.df7cb.de
Since pg_upgrade does not transfer the cumulative statistics used
to trigger autovacuum and autoanalyze, the server may take much
longer than expected to process them post-upgrade. Currently, we
recommend analyzing only relations for which optimizer statistics
were not transferred by using the --analyze-in-stages and
--missing-stats-only options. This commit appends another
recommendation to analyze all relations to update the relevant
cumulative statistics by using the --analyze-only option. This is
similar to the recommendation for pg_stat_reset().
Reported-by: Christoph Berg <myon@debian.org>
Reviewed-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/aAfxfKC82B9NvJDj%40msg.df7cb.de
DST law changes in Chile: there is a new time zone America/Coyhaique
for Chile's Aysén Region, to account for it changing to UTC-03
year-round and thus diverging from America/Santiago.
Historical corrections for Iran.
Backpatch-through: 13
When copying the string, strncpy() won't add NUL termination, since
the string length is equal to the length specified. Explicitly set a
NUL terminator after copying to properly terminate the string. Found
via post-commit review.
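In general terms the fix is of this shape; the names are illustrative, and
the destination buffer is assumed to have room for namelen + 1 bytes.
    /* strncpy() copies exactly namelen bytes here and adds no terminator. */
    strncpy(entry->name, name, namelen);
    entry->name[namelen] = '\0';        /* terminate explicitly */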
Author: Rahila Syed <rahilasyed90@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAH2L28vt16C9xTuK+K7QZvtA3kCNWXOEiT=gEekUw3Xxp9LVQw@mail.gmail.com
This reverts commit 38da053463bef32adf563ddee5277d16d2b6c5af, which
attempted to preserve our ability to start with only 60 semaphores.
Subsequent changes (particularly 55b454d0e) have put that idea pretty
much permanently out of reach: people wishing to use Postgres v18 on
OpenBSD or NetBSD will have no choice but to increase those platforms'
default values of SEMMNI and SEMMNS.
Hence, revert 38da05346's changes in SEMAS_PER_SET and the minimum
tested value of max_connections. Adjust a comment from the subsequent
patch 6d0154196, and tweak the wording in runtime.sgml to make it
clear that changing SEMMNI/SEMMNS is no longer even a little bit
optional on these platforms.
Although 38da05346 was later back-patched into v17, leave that branch
alone: it's still capable of starting with 60 semaphores, and there's
no reason to break that.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/E1tuZNv-0037Gs-34@gemulon.postgresql.org
Discussion: https://postgr.es/m/1052019.1745947915@sss.pgh.pa.us
Tell UIs to hide the value of oauth_client_secret, like the other
passwords. Due to the previous commit, this does not affect postgres_fdw
and dblink, but add a comment to try to warn others of the hazard in the
future.
Reported-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20250415191435.55.nmisch%40google.com
Commit 6d01541960 taught initdb to lower the default value of
autovacuum_worker_slots for systems with very few semaphores. It
also added a "fake" report for the chosen value, i.e., initdb
prints a message about selecting the default, but the value was
already selected in a previous test. Per discussion, this is not a
precedent we want to set, and it seems unnecessary to report
everything derived from max_connections, so let's remove the "fake"
report.
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/de722583-4ba4-4063-bc41-e20684978116%40eisentraut.org
This commit fixes two bugs in the ChangeVarNodes_walker() function.
* When considering a RestrictInfo, walk down to its clauses based on the
presence of the relid to be deleted not just in clause_relids but also in
required_relids.
* Incrementally adjust num_base_rels based on the change of clause_relids
instead of recalculating it using clause_relids, which could contain
outer-join relids.
Reported-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://postgr.es/m/CAMbWs49PE3CvnV8vrQ0Dr%3DHqgZZmX0tdNbzVNJxqc8yg-8kDQQ%40mail.gmail.com
Author: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
The --help synopsis should only be one line. This rephrases the first
line a bit to reflect the new functionality of restoring multiple
databases from pg_dumpall output. Additional explanations are better
kept in the man page.
The slot synchronization skips updating the confirmed_flush LSN of the
local slot if the local slot has a newer catalog_xmin or restart_lsn, but
still allows updating the two_phase and two_phase_at fields of the slot.
This opens up a window for prepared transactions between the old
confirmed_flush LSN and two_phase_at to unexpectedly get decoded and sent
to the downstream after promotion. Then, while decoding the commit
prepared record, an assert will fail because it expects that the prepare
hasn't already been sent to the downstream.
The fix is to skip updating the other slot fields when we skip updating
the confirmed_flush LSN of the slot.
We didn't backpatch this commit as two_phase_at was not synced in back
branches, which means prepared transactions won't be unexpectedly sent to
downstream.
We discovered this problem while analyzing BF failure reported in the
discussion link.
Reliably reproducing this issue without a debugger is difficult. Given
its rarity, adding a specific injection point to test it doesn't seem
worthwhile, so we won't be adding a dedicated test case.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/OS0PR01MB5716B44052000EB91EFAE60E94BC2@OS0PR01MB5716.jpnprd01.prod.outlook.com
Before commit a0ed19e0a9e there was a cast around these, and although the
cast inadvertently changed the signedness, that happened to make the
format placeholders correct. Commit a0ed19e0a9e removed the casts, so now
the format placeholders have the wrong signedness.
Index vacuuming and [auto]prewarm AIO concurrency should be governed by
maintenance_io_concurrency. As such, pass those read stream users the
READ_STREAM_MAINTENANCE flag which will calculate their read stream
distance with maintenance_io_concurrency instead of
effective_io_concurrency. This was an oversight in the original commits
making those operations use the read stream API.
Discussion: https://postgr.es/m/flat/CAAKRu_aopDxTo4b41Mt_7Zc-z0_ngocrY8SFCCY6Aph1HgwuNw%40mail.gmail.com
The check of whether another primitive scan is required after all, made
once the scan advances to the next leaf page, was moved from _bt_checkkeys
to its _bt_readpage caller by commit 9a2e2a28. Update a comment that
incorrectly described the
recheck mechanism as something that takes place in _bt_checkkeys.
Also fix an older typo in related code comments.
_bt_check_compare neglected to handle a case that can arise when the
scan's keys are temporarily treated as nonrequired, as an optimization:
whenever a NULL tuple value was encountered that had a skip array whose
current element wasn't already NULL, _bt_check_compare failed to advance
the array to the NULL element. This allowed _bt_check_compare to fail
to return matching tuples containing a NULL value (though only with an
array column that came before a skip array column with NULLs, and only
during _bt_readpage calls that set pstate.forcenonrequired=true on a
page where the higher-order column also had to advance).
To fix, teach _bt_check_compare to handle this case just like any other
case where a skip array key is unsatisfied and must be advanced directly
(due to the key being considered a nonrequired key).
Oversight in commit 8a510275, which optimized nbtree search scan key
comparisons with skip arrays.
Author: Peter Geoghegan <pg@bowt.ie>
Reported-By: Mark Dilger <mark.dilger@enterprisedb.com>
Discussion: https://postgr.es/m/CAHgHdKtLFWZcjr87hMH0hYDHgcifu4Tj7iHz-xh8qsJREt5cqA@mail.gmail.com
When a child constraint is validated and the parent constraint it
derives from isn't, pg_dump must be coerced into printing the child
constraint; failing to do so would result in a dump that restores the
constraint as not valid, which would be incorrect.
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: jian he <jian.universality@gmail.com>
Message-id: https://postgr.es/m/CACJufxGHNNMc0E2JphUqJMzD3=bwRSuAEVBF5ekgkG8uY0Q3hg@mail.gmail.com
During logical decoding, we advance the catalog_xmin of the logical slot
too early in fast_forward mode, resulting in required catalog data being
removed by vacuum. This mode is normally used to advance the slot without
processing the changes, but we still can't let the slot's xmin advance to
an incorrect value.
Commit f49a80c481 fixed a similar issue where the logical slot's
catalog_xmin was getting advanced prematurely during non-fast-forward
mode. During xl_running_xacts processing, instead of directly advancing
the slot's xmin to the oldest running xid in the record, it allowed the
xmin to be held back for snapshots that can be used for
not-yet-replayed transactions, as those might consider older txns as
running too. However, it missed the fact that the same problem can happen
during fast_forward mode decoding, as we won't build a base snapshot in
that mode, and the future call to get_changes from the same slot can miss
seeing the required catalog changes, leading to incorrect results.
This commit allows building the base snapshot even in fast_forward mode to
prevent the early advancement of xmin.
Reported-by: Amit Kapila <amit.kapila16@gmail.com>
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 13
Discussion: https://postgr.es/m/CAA4eK1LqWncUOqKijiafe+Ypt1gQAQRjctKLMY953J79xDBgAg@mail.gmail.com
Discussion: https://postgr.es/m/OS0PR01MB57163087F86621D44D9A72BF94BB2@OS0PR01MB5716.jpnprd01.prod.outlook.com
wait_event_types.h is generated by the code and included wait_event.h,
while wait_event.h did the opposite, including wait_event_types.h,
causing a circular dependency between the two.
wait_event_types.h only needs to know about the wait event classes, so
this information is moved into its own file, and wait_event_types.h now
uses this new header so that it no longer depends on wait_event.h.
Note that such errors can be found with clang-tidy, with commands like
this one:
clang-tidy source_file.c --checks=misc-header-include-cycle -- \
-I/install/path/include/ -I/install/path/include/server/
Issue introduced by fa88928470b5.
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/350192.1745768770@sss.pgh.pa.us
fc069a3a6319 implements Self-Join Elimination (SJE), which can remove base
relations when appropriate. However, the regression tests for SJE only cover
the case where placeholder variables (PHVs) are evaluated and needed only
in a single base rel. If this baserel is removed due to SJE, its clauses,
including PHVs, are transferred to the kept relation. Removing these
PHVs may trigger an error at plan creation time -- thanks to b3ff6c742f6c
for detecting that.
This commit skips the removal of PHVs during SJE. As a result, we may also
keep some PHVs that could in fact be removed. However, the overhead of the
extra PHVs is small compared to the complexity of the analysis needed to
remove them.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Alena Rybakina <a.rybakina@postgrespro.ru>
Author: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Per our usual policy, Postgres header files should not include these;
the decision as to which one to use is to be made in the calling .c
file instead.
These errors aren't particularly new, but I'm not feeling a need
to back-patch these changes; it's mostly just neatnik-ism.
This cleans up some loose ends left by commit e8ca9ed1d. I hadn't
looked closely enough at these places before, but now I have.
The use of double-quoted #includes for Perl headers in plperl_system.h
seems to be simply a mistake introduced in 6c944bf3c and faithfully
copied forward since then. (I had thought possibly it was required
by some weird Windows build setup, but there's no evidence of that in
our history.)
The occurrences in SectionMemoryManager.h and SectionMemoryManager.cpp
evidently stem from those files' origin as LLVM code. It's
understandable that LLVM would treat their own files as needing
double-quoted #includes; but they're still system headers to us.
I also applied the same check to *.c files, and found a few other
random incorrect usages in both directions.
Our ECPG headers and test files routinely use angle brackets to refer
to ECPG headers. I left those usages alone, since it seems reasonable
for an ECPG user to regard those headers as system headers.
plpython.h included plpy_util.h, simply on the grounds that "it's
easier to just include it everywhere". However, plpy_util.h must
include plpython.h, or it won't pass headerscheck. While the
resulting circularity doesn't have any immediate bad effect,
it's poor design. We have seen serious messes arise in the past
from overly-broad inclusion footprints created by such circularities,
so let's establish a project policy against it.
To fix, just replace *.c files' inclusions of plpython.h with
plpy_util.h. They'll pull in plpython.h indirectly; indeed, almost
all have already done so via inclusions of other plpy_xxx.h headers.
(Any extensions using plpython.h can do likewise without breaking
the compatibility of their code with prior Postgres versions.)
Reported-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aAxQ6fcY5QQV1lo3@ip-10-97-1-34.eu-west-3.compute.internal
While few if any C compilers will complain about this, it's
inconsistent with our other #include's of the same headers.
There are some other questionable usages in
src/include/jit/SectionMemoryManager.h and
src/pl/plperl/plperl_system.h, but perhaps those have a
reason to be like that. I can't see that these do.
Noticed while fooling around with a script to do analysis
of our header cross-inclusions.
c4d5cb71d2 adjusted the fast-path locking code to allow some
configuration of the number of fast-path locking slots via the
max_locks_per_transaction GUC. In that commit the FAST_PATH_REL_GROUP()
macro used integer division to determine the fast-path locking group slot
to use for the lock.
The divisor in this case is always a power-of-two value. Here we swap
the division out for a bitwise AND, which is a significantly faster
operation to perform.
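Illustratively, when the group count is a power of two, reducing a value
to a group number with % (which compiles to a hardware divide for a
runtime divisor) can be replaced by a bitwise AND; the actual
FAST_PATH_REL_GROUP() macro is more involved than this sketch.
    /* Both expressions select the same group for a power-of-two count. */
    group = hashvalue % FastPathLockGroupsPerBackend;           /* divide */
    group = hashvalue & (FastPathLockGroupsPerBackend - 1);     /* AND */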
In passing, adjust the code that's setting FastPathLockGroupsPerBackend
so that it's clearer that the value being set is a power of two.
Also, adjust some comments in the area which contained some magic
numbers. It seems better to justify the 1024 upper limit in the
location where the #define is made instead of where it is used.
Author: David Rowley <drowleyml@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAApHDvodr3bcnpxcs7+k-3cFwYR0tP-BYhyd2PpDhe-bCx9i=g@mail.gmail.com