Compare commits

152 Commits

Tom Lane
ab844ce378 Stamp 18beta3.
2025-08-11 17:02:23 -04:00
Nathan Bossart
67a2fbb8f9 Restrict psql meta-commands in plain-text dumps.
A malicious server could inject psql meta-commands into plain-text
dump output (i.e., scripts created with pg_dump --format=plain,
pg_dumpall, or pg_restore --file) that are run at restore time on
the machine running psql.  To fix, introduce a new "restricted"
mode in psql that blocks all meta-commands (except for \unrestrict
to exit the mode), and teach pg_dump, pg_dumpall, and pg_restore to
use this mode in plain-text dumps.

While at it, encourage users to only restore dumps generated from
trusted servers or to inspect them beforehand, since restoring causes
the destination to execute arbitrary code of the source superusers'
choice.  However, the client running the dump and restore needn't
trust the source or destination superusers.

Reported-by: Martin Rakhmanov
Reported-by: Matthieu Denais <litezeraw@gmail.com>
Reported-by: RyotaK <ryotak.mail@gmail.com>
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Security: CVE-2025-8714
Backpatch-through: 13
2025-08-11 09:00:00 -05:00
Noah Misch
13a67ce603 Convert newlines to spaces in names written in v11+ pg_dump comments.
Maliciously-crafted object names could achieve SQL injection during
restore.  CVE-2012-0868 fixed this class of problem at the time, but
later work reintroduced three cases.  Commit
bc8cd50fefd369b217f80078585c486505aafb62 (back-patched to v11+ in
2023-05 releases) introduced the pg_dump case.  Commit
6cbdbd9e8d8f2986fde44f2431ed8d0c8fce7f5d (v12+) introduced the two
pg_dumpall cases.  Move sanitize_line(), unchanged, to dumputils.c so
pg_dumpall has access to it in all supported versions.  Back-patch to
v13 (all supported versions).
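
For illustration, a standalone sketch of the idea (the helper name here
is hypothetical; the real sanitize_line() in dumputils.c differs in
detail):

    #include <stdlib.h>
    #include <string.h>

    /*
     * Replace newlines in an object name with spaces before embedding
     * it in a dump comment, so a crafted name cannot terminate the
     * comment line and smuggle SQL into the restore script.
     */
    static char *
    sanitize_name_for_comment(const char *name)
    {
        char   *copy = strdup(name);

        if (copy == NULL)
            return NULL;
        for (char *p = copy; *p != '\0'; p++)
        {
            if (*p == '\n' || *p == '\r')
                *p = ' ';
        }
        return copy;
    }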

Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Backpatch-through: 13
Security: CVE-2025-8715
2025-08-11 06:19:03 -07:00
Peter Eisentraut
605fdb989b Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 380d4ff4d883aef5cb4e5ced45a339771197e6ef
2025-08-11 14:38:06 +02:00
Dean Rasheed
64f77c6a65 Fix security checks in selectivity estimation functions.
Commit e2d4ef8de86 (the fix for CVE-2017-7484) added security checks
to the selectivity estimation functions to prevent them from running
user-supplied operators on data obtained from pg_statistic if the user
lacks privileges to select from the underlying table. In cases
involving inheritance/partitioning, those checks were originally
performed against the child RTE (which for plain inheritance might
actually refer to the parent table). Commit 553d2ec2710 then extended
that to also check the parent RTE, allowing access if the user had
permissions on either the parent or the child. It turns out, however,
that doing any checks using the child RTE is incorrect, since
securityQuals is set to NULL when creating an RTE for an inheritance
child (whether it refers to the parent table or the child table), and
therefore such checks do not correctly account for any RLS policies or
security barrier views. Therefore, do the security checks using only
the parent RTE. This is consistent with how RLS policies are applied,
and the executor's ACL checks, both of which use only the parent
table's permissions/policies. Similar checks are performed in the
extended stats code, so update that in the same way, centralizing all
the checks in a new function.

In addition, note that these checks by themselves are insufficient to
ensure that the user has access to the table's data because, in a
query that goes via a view, they only check that the view owner has
permissions on the underlying table, not that the current user has
permissions on the view itself. In the selectivity estimation
functions, there is no easy way to navigate from underlying tables to
views, so add permissions checks for all views mentioned in the query
to the planner startup code. If the user lacks permissions on a view,
a permissions error will now be reported at planner-startup, and the
selectivity estimation functions will not be run.

Checking view permissions at planner-startup in this way is a little
ugly, since the same checks will be repeated at executor-startup.
Longer-term, it might be better to move all the permissions checks
from the executor to the planner so that permissions errors can be
reported sooner, instead of creating a plan that won't ever be run.
However, such a change seems too far-reaching to be back-patched.

Back-patch to all supported versions. In v13, there is the added
complication that UPDATEs and DELETEs on inherited target tables are
planned using inheritance_planner(), which plans each inheritance
child table separately, so that the selectivity estimation functions
do not know that they are dealing with a child table accessed via its
parent. Handle that by checking access permissions on the top parent
table at planner-startup, in the same way as we do for views. Any
securityQuals on the top parent table are moved down to the child
tables by inheritance_planner(), so they continue to be checked by the
selectivity estimation functions.

Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Noah Misch <noah@leadboat.com>
Backpatch-through: 13
Security: CVE-2025-8713
2025-08-11 09:07:36 +01:00
Noah Misch
0d2734eac3 Remove, from stable branches, the new assertion of no pg_dump OID sort.
Commit 0decd5e89db9f5edb9b27351082f0d74aae7a9b6 recently added the
assertion to confirm dump order remains independent of OID values.  The
assertion remained reachable via DO_DEFAULT_ACL.  Given the release wrap
tomorrow, make the assertion master-only.

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/d32aaa8d-df7c-4f94-bcb3-4c85f02bea21@gmail.com
Backpatch-through: 13-18
2025-08-10 13:05:13 -07:00
Thomas Munro
9110d81641 Fix rare bug in read_stream.c's split IO handling.
The internal queue of buffers could become corrupted in a rare edge case
that failed to invalidate an entry, causing a stale buffer to be
"forwarded" to StartReadBuffers().  This is a simple fix for the
immediate problem.

A small API change might be able to remove this and related fragility
entirely, but that will have to wait a bit.

Defect in commit ed0b87ca.

Bug: 19006
Backpatch-through: 18
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Discussion: https://postgr.es/m/19006-80fcaaf69000377e%40postgresql.org
2025-08-09 13:06:46 +12:00
Peter Eisentraut
762fed90bf postgres_fdw and dblink should check if backend has MyProcPort
before checking ->has_scram_keys.  MyProcPort is NULL in background
workers.  So this could crash, for example, if a background worker
accessed a suitably configured foreign table.

Author: Alexander Pyhalov <a.pyhalov@postgrespro.ru>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/27b29a35-9b96-46a9-bc1a-914140869dac%40gmail.com
2025-08-08 19:51:57 +02:00
Jacob Champion
41aac1483a oauth: Track total call count during a client flow
Tracking down the bugs that led to the addition of comb_multiplexer()
and drain_timer_events() was difficult, because an inefficient flow is
not visibly different from one that is working properly. To help
maintainers notice when something has gone wrong, track the number of
calls into the flow as part of debug mode, and print the total when the
flow finishes.

A new test makes sure the total count is less than 100. (We expect
something on the order of 10.) This isn't foolproof, but it is able to
catch several regressions in the logic of the prior two commits, and
future work to add TLS support to the oauth_validator test server should
strengthen it as well.

Backpatch-through: 18
Discussion: https://postgr.es/m/CAOYmi+nDZxJHaWj9_jRSyf8uMToCADAmOfJEggsKW-kY7aUwHA@mail.gmail.com
2025-08-08 08:48:23 -07:00
Jacob Champion
e507e08acf oauth: Remove expired timers from the multiplexer
In a case similar to the previous commit, an expired timer can remain
permanently readable if Curl does not remove the timeout itself. Since
that removal isn't guaranteed to happen in real-world situations,
implement drain_timer_events() to reset the timer before calling into
drive_request().

Moving to drain_timer_events() happens to fix a logic bug in the
previous caller of timer_expired(), which treated an error condition as
if the timer were expired instead of bailing out.

The previous implementation of timer_expired() gave differing results
for epoll and kqueue if the timer was reset. (For epoll, a reset timer
was considered to be expired, and for kqueue it was not.) This didn't
previously cause problems, since timer_expired() was only called while
the timer was known to be set, but both implementations now use the
kqueue logic.

Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CAOYmi+nDZxJHaWj9_jRSyf8uMToCADAmOfJEggsKW-kY7aUwHA@mail.gmail.com
2025-08-08 08:48:23 -07:00
Jacob Champion
16b0c48583 oauth: Ensure unused socket registrations are removed
If Curl needs to switch the direction of a socket's registration (e.g.
from CURL_POLL_IN to CURL_POLL_OUT), it expects the old registration to
be discarded. For epoll, this happened via EPOLL_CTL_MOD, but for
kqueue, the old registration would remain if it was not explicitly
removed by Curl.

Explicitly remove the opposite-direction event during registrations. (If
that event doesn't exist, we'll just get an ENOENT, which will be
ignored by the same code that handles CURL_POLL_REMOVE.) A few
assertions are also added to strengthen the relationship between the
number of events added, the number of events pulled off the queue, and
the lengths of the kevent arrays.
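
A hypothetical sketch of the registration change (error handling
trimmed; kevent() semantics as documented in the BSD man pages):

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>

    /* Register a socket for readability, dropping any old writability
     * registration first.  ENOENT from the deletion just means the
     * opposite filter was never added, so it is ignored, mirroring the
     * CURL_POLL_REMOVE handling mentioned above. */
    static int
    register_socket_readable(int kq, int fd)
    {
        struct kevent ev;

        EV_SET(&ev, fd, EVFILT_WRITE, EV_DELETE, 0, 0, NULL);
        if (kevent(kq, &ev, 1, NULL, 0, NULL) < 0 && errno != ENOENT)
            return -1;          /* a real failure */

        EV_SET(&ev, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
        return kevent(kq, &ev, 1, NULL, 0, NULL);
    }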

Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CAOYmi+nDZxJHaWj9_jRSyf8uMToCADAmOfJEggsKW-kY7aUwHA@mail.gmail.com
2025-08-08 08:48:23 -07:00
Jacob Champion
ff181d1f87 oauth: Remove stale events from the kqueue multiplexer
If a socket is added to the kqueue, becomes readable/writable, and
subsequently becomes non-readable/writable again, the kqueue itself will
remain readable until either the socket registration is removed, or the
stale event is cleared via a call to kevent().

In many simple cases, Curl itself will remove the socket registration
quickly, but in real-world usage, this is not guaranteed to happen. The
kqueue can then remain stuck in a permanently readable state until the
request ends, which results in pointless wakeups for the client and
wasted CPU time.

Implement comb_multiplexer() to call kevent() and unstick any stale
events that would cause unnecessary callbacks. This is called right
after drive_request(), before we return control to the client to wait.

Suggested-by: Thomas Munro <thomas.munro@gmail.com>
Co-authored-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Backpatch-through: 18
Discussion: https://postgr.es/m/CAOYmi+nDZxJHaWj9_jRSyf8uMToCADAmOfJEggsKW-kY7aUwHA@mail.gmail.com
2025-08-08 08:48:23 -07:00
Thomas Munro
4cd9d5fc15 Remove obsolete comment.
Remove a comment about potential for AIO in StartReadBuffersImpl(),
because that change happened.
2025-08-09 01:46:19 +12:00
Peter Eisentraut
992a18f516 Fix incorrect lack of Datum conversion in _int_matchsel()
The code used

    return (Selectivity) 0.0;

where

    PG_RETURN_FLOAT8(0.0);

would be correct.

On 64-bit systems, these are pretty much equivalent, but on 32-bit
systems, PG_RETURN_FLOAT8() correctly produces a pointer, but the old
wrong code would return a null pointer, possibly leading to a crash
elsewhere.
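
For readers unfamiliar with the Datum conventions, a simplified sketch
of why the two spellings diverge on 32-bit builds (condensed from the
definitions in postgres.h and fmgr.h, which have more indirection;
palloc() is the server's allocator):

    /* Assumes a build where Datum cannot hold a float8 by value. */
    typedef uintptr_t Datum;

    static Datum
    Float8GetDatum(double X)
    {
        double *retval = palloc(sizeof(double));

        *retval = X;
        return (Datum) retval;          /* the Datum carries a pointer */
    }

    #define PG_RETURN_FLOAT8(x)  return Float8GetDatum(x)

    /* "return (Selectivity) 0.0" instead coerces the double 0.0
     * directly to the integer Datum type, i.e. a null pointer. */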

We think this code is actually not reachable because bqarr_in won't
accept an empty query, and there is no other function that will
create query_int values.  But better be safe and not let such
incorrect code lie around.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/8246d7ff-f4b7-4363-913e-827dadfeb145%40eisentraut.org
2025-08-08 12:10:37 +02:00
Etsuro Fujita
bba6a6fafc Fix oversight in FindTriggerIncompatibleWithInheritance.
This function is called from ATExecAttachPartition/ATExecAddInherit,
which prevent tables having row-level triggers with transition tables
from becoming partitions or inheritance children, to check whether the
given table has such a trigger.  However, it failed to check whether a
found trigger is row-level, causing the caller functions to needlessly
prevent a table with only a statement-level trigger with transition
tables from becoming a partition or inheritance child.  Repair.

Oversight in commit 501ed02cf.

Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Discussion: https://postgr.es/m/CAPmGK167mXzwzzmJ_0YZ3EZrbwiCxtM1vogH_8drqsE6PtxRYw%40mail.gmail.com
Backpatch-through: 13
2025-08-08 17:35:00 +09:00
Fujii Masao
e3764229e6 pg_dump: Fix incorrect parsing of object types in pg_dump --filter.
Previously, pg_dump --filter could misinterpret invalid object types
in the filter file as valid ones. For example, the invalid object type
"table-data" (likely a typo for the valid "table_data") could be
mistakenly recognized as "table", causing pg_dump to succeed
when it should have failed.

This happened because pg_dump identified keywords as sequences of
ASCII alphabetic characters, treating non-alphabetic characters
(like hyphens) as keyword boundaries. As a result, "table-data" was
parsed as "table".

To fix this, pg_dump --filter now treats keywords as strings of
non-whitespace characters, ensuring invalid types like "table-data"
are correctly rejected.
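
A standalone sketch of the parsing change (function names here are
hypothetical):

    #include <ctype.h>
    #include <stddef.h>

    /* Old rule: a keyword ends at the first non-alphabetic byte,
     * so "table-data" scanned as the valid keyword "table". */
    static size_t
    keyword_len_old(const char *s)
    {
        size_t  n = 0;

        while (isalpha((unsigned char) s[n]))
            n++;
        return n;
    }

    /* New rule: a keyword runs to the next whitespace byte, so
     * "table-data" is compared, and rejected, as a whole token. */
    static size_t
    keyword_len_new(const char *s)
    {
        size_t  n = 0;

        while (s[n] != '\0' && !isspace((unsigned char) s[n]))
            n++;
        return n;
    }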

Back-patch to v17, where the --filter option was introduced.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Xuneng Zhou <xunengzhou@gmail.com>
Reviewed-by: Srinath Reddy <srinath2133@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CAHGQGwFzPKUwiV5C-NLBqz1oK1+z9K8cgrF+LcxFem-p3_Ftug@mail.gmail.com
Backpatch-through: 17
2025-08-08 14:37:32 +09:00
Etsuro Fujita
ce88170227 Disallow collecting transition tuples from child foreign tables.
Commit 9e6104c66 disallowed transition tables on foreign tables, but
failed to account for cases where a foreign table is a child table of a
partitioned/inherited table on which transition tables exist, leading to
incorrect transition tuples collected from such foreign tables for
queries on the parent table triggering transition capture.  This
occurred not only for inherited UPDATE/DELETE but for partitioned INSERT
later supported by commit 3d956d956, which should have handled it at
least for the INSERT case, but didn't.

To fix, modify ExecAR*Triggers to throw an error if the given relation
is a foreign table requesting transition capture.  Also, this commit
fixes make_modifytable so that in case of an inherited UPDATE/DELETE
triggering transition capture, FDWs choose normal operations to modify
child foreign tables, not DirectModify; which is needed because they
would otherwise skip the calls to ExecAR*Triggers at execution, causing
unexpected behavior.

Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Reviewed-by: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CAPmGK14QJYikKzBDCe3jMbpGENnQ7popFmbEgm-XTNuk55oyHg%40mail.gmail.com
Backpatch-through: 13
2025-08-08 10:50:01 +09:00
Michael Paquier
ab74ce4dc9 Add information about "generation" when dropping twice pgstats entry
Dropping a pgstats entry twice should not happen, and the error report
generated was missing the "generation" counter (tracking when an entry
is reused) that was added in 818119afccd3.

Like d92573adcb02, backpatch down to v15 where this information is
useful to have, to gather more information from instances where the
problem shows up.  A report has shown that this error path has been
reached on a standby based on 17.3, for a relation stats entry and an
OID close to wraparound.

Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/CAN4RuQvYth942J2+FcLmJKgdpq6fE5eqyFvb_PuskxF2eL=Wzg@mail.gmail.com
Backpatch-through: 15
2025-08-08 09:07:49 +09:00
Jacob Champion
a9ffb35274 meson: Fix install-quiet after clean
libpq-oauth was missing from the installed_targets list, so

    $ ninja clean && ninja install-quiet

failed with the error message

    ERROR: File 'src/interfaces/libpq-oauth/libpq-oauth.a' could not be found

It seems a little odd to have to tell Meson what's missing, since it
clearly knows how to build that file during regular installation. But
the "quiet" variant we've created must use --no-rebuild, to avoid
spawning concurrent ninja processes that would step on each other.

Reported-by: Andres Freund <andres@anarazel.de>
Backpatch-through: 18
Discussion: https://postgr.es/m/hbpqdwxkfnqijaxzgdpvdtp57s7gwxa5d6sbxswovjrournlk6%404jnb2gzan4em
2025-08-07 15:31:36 -07:00
Tom Lane
31c09ef456 doc: add float as an alias for double precision.
Although the "Floating-Point Types" section says that "float" data
type is taken to mean "double precision", this information was not
reflected in the data type table that lists all data type aliases.

Reported-by: alexander.kjall@hafslund.no
Author: Euler Taveira <euler@eulerto.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/175456294638.800.12038559679827947313@wrigleys.postgresql.org
Backpatch-through: 13
2025-08-07 18:04:55 -04:00
Peter Eisentraut
f15c00e909 doc: Formatting improvements
Small touch-up on commits 25505082f0e and 50fd428b2b9.  Fix the
formatting of the example messages in the documentation and adjust the
wording to match the code.
2025-08-07 14:07:19 +02:00
Alexander Korotkov
5cfbff48a4 Fix checkpointer shared memory allocation
Use Min(NBuffers, MAX_CHECKPOINT_REQUESTS) instead of NBuffers in
CheckpointerShmemSize() to match the actual array size limit set in
CheckpointerShmemInit().  This prevents wasting shared memory when
NBuffers > MAX_CHECKPOINT_REQUESTS.  Also, fix the comment.
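
In outline, the fix makes the size estimate apply the same cap as the
initialization code (a sketch following the commit's description; the
offsetof/add_size details are assumptions about the surrounding code):

    Size
    CheckpointerShmemSize(void)
    {
        Size    size;

        /* Match the array size actually allocated by
         * CheckpointerShmemInit() instead of assuming NBuffers slots. */
        size = offsetof(CheckpointerShmemStruct, requests);
        size = add_size(size,
                        mul_size(Min(NBuffers, MAX_CHECKPOINT_REQUESTS),
                                 sizeof(CheckpointerRequest)));
        return size;
    }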

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1439188.1754506714%40sss.pgh.pa.us
Author: Xuneng Zhou <xunengzhou@gmail.com>
Co-authored-by: Alexander Korotkov <aekorotkov@gmail.com>
2025-08-07 14:31:18 +03:00
Alexander Korotkov
2ae8280d16 Revert "Clarify documentation for the initcap function"
This reverts commit 1fe9e3822c4e574aa526b99af723e61e03f36d4f.  That commit
was a documentation improvement, not a bug fix.  We don't normally backpatch
such changes.

Discussion: https://postgr.es/m/d8eacbeb8194c578a98317b86d7eb2ef0b6eb0e0.camel%40j-davis.com
2025-08-07 14:11:49 +03:00
John Naylor
dd29262077 Update ICU C++ API symbols
Recent ICU versions have added U_SHOW_CPLUSPLUS_HEADER_API, and we need
to set this to zero as well to hide the ICU C++ APIs from pg_locale.h.

Per discussion, we want cpluspluscheck to work cleanly in backbranches,
so backpatch both this and its predecessor commit ed26c4e25a4 to all
supported versions.

Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1115793.1754414782%40sss.pgh.pa.us
Backpatch-through: 13
2025-08-07 17:12:44 +07:00
Peter Eisentraut
1084e76f3c pg_upgrade: Improve message indentation
Fix commit f295494d338 to use consistent four-space indentation for
verbose messages.
2025-08-07 11:59:14 +02:00
Michael Paquier
67ffe1987d Improve tests of date_trunc() with infinity and unsupported units
Commit d85ce012f99f has added some new error handling code to
date_trunc() of timestamp, timestamptz, and interval with infinite
values.

However, the new test cases added by that commit did not actually test
all of the new code, missing coverage for the following cases:
1) For timestamp without time zone:
1-1) infinite value with valid unit
1-2) infinite value with unsupported unit
1-3) finite value with unsupported unit, for a code path older than
d85ce012f99f.
2) For timestamp with time zone, without a time zone specified for the
truncation:
2-1) infinite value with valid unit
2-2) infinite value with unsupported unit
2-3) finite value with unsupported unit, for a code path older than
d85ce012f99f.
3) For timestamp with time zone, with a time zone specified for the
truncation:
3-1) infinite value with valid unit.
3-2) infinite value with unsupported unit.

This commit also provides coverage for the bug fixed in 2242b26ce472,
through cases 2-1) and 3-1), when using an infinite value with a valid
unit, with[out] the optional time zone parameter used for the
truncation.

Author: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/2d320b6f-b4af-4fbc-9eec-5d0fa15d187b@eisentraut.org
Discussion: https://postgr.es/m/4bf60a84-2862-4a53-acd5-8eddf134a60e@eisentraut.org
Backpatch-through: 18
2025-08-07 11:49:29 +09:00
Michael Paquier
074db8604a Fix incorrect Datum conversion in timestamptz_trunc_internal()
The code used a PG_RETURN_TIMESTAMPTZ() where the return type is
TimestampTz and not a Datum.

On 64-bit systems, there is no effect since this just ends up casting
64-bit integers back and forth.  On 32-bit systems, timestamptz is
pass-by-reference.  PG_RETURN_TIMESTAMPTZ() allocates new memory and
returns the address, meaning that the caller could interpret this as a
timestamp value.

The effect is that date_trunc(..., 'infinity'::timestamptz) will
return random values (instead of the correct return value 'infinity').

Bug introduced in commit d85ce012f99f.

Author: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2d320b6f-b4af-4fbc-9eec-5d0fa15d187b@eisentraut.org
Discussion: https://postgr.es/m/4bf60a84-2862-4a53-acd5-8eddf134a60e@eisentraut.org
Backpatch-through: 18
2025-08-07 11:02:09 +09:00
Peter Eisentraut
ce13bb96fb Remove INT64_HEX_FORMAT and UINT64_HEX_FORMAT
These were introduced (commit efdc7d74753) at the same time as we were
moving to using the standard inttypes.h format macros (commit
a0ed19e0a9e).  It doesn't seem useful to keep a new already-deprecated
interface like this with only a few users, so remove the new symbols
again and have the callers use PRIx64.

(Also, INT64_HEX_FORMAT was kind of a misnomer, since hex formats all
use unsigned types.)

Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/0ac47b5d-e5ab-4cac-98a7-bdee0e2831e4%40eisentraut.org
2025-08-06 10:58:06 +02:00
Fujii Masao
3e65e77f76 doc: Recommend ANALYZE after ALTER TABLE ... SET EXPRESSION AS.
ALTER TABLE ... SET EXPRESSION AS removes statistics for the target column,
so running ANALYZE afterward is recommended. But this was previously not
documented, even though a similar recommendation exists for
ALTER TABLE ... SET DATA TYPE, which also clears the column's statistics.
This commit updates the documentation to include the ANALYZE recommendation
for SET EXPRESSION AS.

Since v18, virtual generated columns are supported, and these columns never
have statistics. Therefore, ANALYZE is not needed after SET DATA TYPE or
SET EXPRESSION AS when used on virtual generated columns. This commit also
updates the documentation to clarify that ANALYZE is unnecessary in such cases.

Back-patch the ANALYZE recommendation for SET EXPRESSION AS to v17
where the feature was introduced, and the note about virtual generated
columns to v18 where those columns were added.

Author: Yugo Nagata <nagata@sraoss.co.jp>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/20250804151418.0cf365bd2855d606763443fe@sraoss.co.jp
Backpatch-through: 17
2025-08-06 16:48:22 +09:00
Tom Lane
9b681e2397 Fix incorrect return value in brin_minmax_multi_distance_numeric().
The result of "DirectFunctionCall1(numeric_float8, d)" is already in
Datum form, but the code was incorrectly applying PG_RETURN_FLOAT8()
to it.  On machines where float8 is pass-by-reference, this would
result in complete garbage, since an unpredictable pointer value
would be treated as an integer and then converted to float.  It's not
entirely clear how much of a problem would ensue on 64-bit hardware,
but certainly interpreting a float8 bitpattern as uint64 and then
converting that to float isn't the intended behavior.

As luck would have it, even the complete-garbage case doesn't break
BRIN indexes, since the results are only used to make choices about
how to merge values into ranges: at worst, we'd make poor choices
resulting in an inefficient index.  Doubtless that explains the lack
of field complaints.  However, users with BRIN indexes that use the
numeric_minmax_multi_ops opclass may wish to reindex in hopes of
making their indexes more efficient.

Author: Peter Eisentraut <peter@eisentraut.org>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/2093712.1753983215@sss.pgh.pa.us
Backpatch-through: 14
2025-08-05 16:51:10 -04:00
Álvaro Herrera
f71ad5b082
Put PG_TEST_EXTRA doc items back in alphabetical order
A few items appear to have been added in random order over the years.
2025-08-05 20:22:32 +02:00
Álvaro Herrera
d185161e47
Hide expensive pg_upgrade test behind PG_TEST_EXTRA
This new test is very expensive.  Make it opt-in.

Discussion: https://postgr.es/m/202508051433.ebznuqrxt4b2@alvherre.pgsql
2025-08-05 20:09:42 +02:00
Jeff Davis
06697909b6 Don't copy datlocale from template unless provider matches.
During CREATE DATABASE, if changing the locale provider, require that
a new locale is specified rather than trying to reinterpret the
template's locale using the new provider.

This only affects the behavior when the template uses the builtin
provider and CREATE DATABASE specifies the ICU provider without
specifying the locale. Previously, that may have succeeded due to
loose validation by ICU, whereas now that will cause an error. Because
it can cause an error, backport only to unreleased versions.

Discussion: https://postgr.es/m/5038b33a6dc639009f4b3d43fa6ae0c5ba9e04f7.camel@j-davis.com
Backpatch-through: 18
2025-08-05 09:23:57 -07:00
Amit Kapila
e5d04aedaf Throw ERROR when publish_generated_columns is specified without a value.
Previously, specifying the publication option 'publish_generated_columns'
without an explicit value would incorrectly default to 'stored', which is
not the intended behavior.

This patch fixes the issue by raising an ERROR when no value is provided
for 'publish_generated_columns', ensuring that users must explicitly
specify a valid option.

Author: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Backpatch-through: 18, where it was introduced
Discussion: https://postgr.es/m/CAHut+PsCUCWiEKmB10DxhoPfXbF6jw5RD9ib2LuaQeA_XraW7w@mail.gmail.com
2025-08-05 09:21:50 +00:00
Andrew Dunstan
7c3a036f5c fix apparent typo in release notes
2025-08-04 15:37:00 -04:00
Melanie Plageman
0e6b791846 Minor test fixes in 035_standby_logical_decoding.pl
Import usleep, which, due to an oversight in commit 48796a98d5ae,
was used but not imported.

Correct the comparison string used in two logfile checks. Previously, it
was incorrect and thus the test could never have failed.

Also wordsmith a comment to make it clear when hot_standby_feedback is
meant to be on during the test scenarios.

Reported-by: Melanie Plageman <melanieplageman@gmail.com>
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/flat/CAAKRu_YO2mEm%3DZWZKPjTMU%3DgW5Y83_KMi_1cr51JwavH0ctd7w%40mail.gmail.com
Backpatch-through: 16
2025-08-04 15:07:22 -04:00
Dean Rasheed
347b6a1fff Fix typo in create_index.sql.
Introduced by 578b229718e.

Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Tender Wang <tndrwang@gmail.com>
Discussion: https://postgr.es/m/CAEZATCV_CzRSOPMf1gbHQ7xTmyrV6kE7ViCBD6B81WF7GfTAEA@mail.gmail.com
Backpatch-through: 13
2025-08-04 16:20:31 +01:00
Fujii Masao
2d81a246f4 Avoid unexpected shutdown when sync_replication_slots is enabled.
Previously, enabling sync_replication_slots while wal_level was not set
to logical could cause the server to shut down. This was because
the postmaster performed a configuration check before launching
the slot synchronization worker and raised an ERROR if the settings
were incompatible. Since ERROR is treated as FATAL in the postmaster,
this resulted in the entire server shutting down unexpectedly.

This commit changes the postmaster to log that message at LOG level
instead of raising an ERROR, allowing the server to continue running
even with the misconfiguration.

Back-patch to v17, where slot synchronization was introduced.

Reported-by: Hugo DUBOIS <hdubois@scaleway.com>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Hugo DUBOIS <hdubois@scaleway.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Discussion: https://postgr.es/m/CAH0PTU_pc3oHi__XESF9ZigCyzai1Mo3LsOdFyQA4aUDkm01RA@mail.gmail.com
Backpatch-through: 17
2025-08-04 20:52:59 +09:00
Álvaro Herrera
7b1053a577
doc: mention unusability of dropped CHECK to verify NOT NULL
It's possible to use a CHECK (col IS NOT NULL) constraint to skip
scanning a table for nulls when adding a NOT NULL constraint on the same
column.  However, if the CHECK constraint is dropped on the same command
that the NOT NULL is added, this fails, i.e., makes the NOT NULL addition
slow.  The best we can do about it at this stage is to document this so
that users aren't taken by surprise.

(In Postgres 18 you can directly add the NOT NULL constraint as NOT
VALID instead, so there's no longer much use for the CHECK constraint,
therefore no point in building mechanism to support the case better.)

Reported-by: Andrew <psy2000usa@yahoo.com>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/175385113607.786.16774570234342968908@wrigleys.postgresql.org
2025-08-04 13:26:45 +02:00
Fujii Masao
fee46ab4f2 Fix assertion failure in pgbench when handling multiple pipeline sync messages.
Previously, when running pgbench in pipeline mode with a custom script
that triggered retriable errors (e.g., serialization errors),
an assertion failure could occur:

    Assertion failed: (res == ((void*)0)), function discardUntilSync, file pgbench.c, line 3515.

The root cause was that pgbench incorrectly assumed only a single
pipeline sync message would be received at the end. In reality,
multiple pipeline sync messages can be sent and must be handled properly.

This commit fixes the issue by updating pgbench to correctly process
multiple pipeline sync messages, preventing the assertion failure.

Back-patch to v15, where the bug was introduced.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Tatsuo Ishii <ishii@postgresql.org>
Discussion: https://postgr.es/m/CAHGQGwFAX56Tfx+1ppo431OSWiLLuW72HaGzZ39NkLkop6bMzQ@mail.gmail.com
Backpatch-through: 15
2025-08-03 10:49:54 +09:00
Jeff Davis
a3e8dc1438 Simplify options in pg_dump and pg_restore.
Remove redundant options --with-data and --with-schema, and rename
--with-statistics to just --statistics.

Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/f379d0aeefe8effe13302a436bc28f549f09e924.camel@j-davis.com
Backpatch-through: 18
2025-08-02 07:51:35 -07:00
Michael Paquier
d0c12b98f2 Fix typo in foreign_key.sql
Introduced by eec0040c4bcd.

Author: Chao Li <lic@highgo.com>
Discussion: https://postgr.es/m/CAEoWx2kKMdtWKQiYNuwG2L41YwHA7G3sUsRfD9esPJwZyX1+Eg@mail.gmail.com
Backpatch-through: 18
2025-08-02 19:54:27 +09:00
Etsuro Fujita
5a900d6482 Doc: clarify the restrictions of AFTER triggers with transition tables.
It was not very clear that the triggers are only allowed on plain tables
(not foreign tables).  Also, rephrase the documentation for better
readability.

Follow up to commit 9e6104c66.

Reported-by: Etsuro Fujita <etsuro.fujita@gmail.com>
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Etsuro Fujita <etsuro.fujita@gmail.com>
Discussion: https://postgr.es/m/CAPmGK16XBs9ptNr8Lk4f-tJZogf6y-Prz%3D8yhvJbb_4dpsc3mQ%40mail.gmail.com
Backpatch-through: 13
2025-08-02 18:30:00 +09:00
Michael Paquier
11de339aad Fix use-after-free with INSERT ON CONFLICT changes in reorderbuffer.c
In ReorderBufferProcessTXN(), used to send the data of a transaction to
an output plugin, INSERT ON CONFLICT changes (INTERNAL_SPEC_INSERT) are
delayed until a confirmation record arrives (INTERNAL_SPEC_CONFIRM),
updating the change being processed.

8c58624df462 has added an extra step after processing a change to update
the progress of the transaction, by calling the callback
update_progress_txn() based on the LSN stored in a change after a
threshold of CHANGES_THRESHOLD (100) is reached.  This logic has missed
the fact that for an INSERT ON CONFLICT change the data is freed once
processed, hence update_progress_txn() could be called pointing to an LSN
value that's already been freed.  This could result in random crashes,
depending on the workload.

Per discussion, this issue is fixed by reusing in update_progress_txn()
the LSN from the change processed found at the beginning of the loop,
meaning that for an INTERNAL_SPEC_CONFIRM change the progress is updated
using the LSN of the INTERNAL_SPEC_CONFIRM change, and not the LSN from
its INTERNAL_SPEC_INSERT change.  This is actually more correct, as we
want to update the progress to point to the INTERNAL_SPEC_CONFIRM
change.
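
In outline, the fix amounts to capturing the LSN before the change is
processed (a simplified sketch; next_change() and ProcessChange() are
stand-ins for the real loop in ReorderBufferProcessTXN()):

    while ((change = next_change(txn)) != NULL)
    {
        XLogRecPtr  change_lsn = change->lsn;   /* save before use */

        ProcessChange(ctx, txn, change);        /* may free "change" */

        if (++changes_count >= CHANGES_THRESHOLD)
        {
            /* report progress with the saved LSN, never change->lsn */
            rb->update_progress_txn(rb, txn, change_lsn);
            changes_count = 0;
        }
    }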

Masahiko Sawada has found a nice trick to reproduce the issue: hardcode
CHANGES_THRESHOLD at 1 and run test_decoding (test "ddl" being enough)
on an instance running valgrind.  The bug has been analyzed by Ethan
Mertz, who also originally suggested the solution used in this patch.

Issue introduced by 8c58624df462, so backpatch down to v16.

Author: Ethan Mertz <ethan.mertz@gmail.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/aIsQqDZ7x4LAQ6u1@paquier.xyz
Backpatch-through: 16
2025-08-02 17:08:48 +09:00
Nathan Bossart
7b9674a8b1 Allow resetting unknown custom GUCs with reserved prefixes.
Currently, ALTER DATABASE/ROLE/SYSTEM RESET [ALL] with an unknown
custom GUC with a prefix reserved by MarkGUCPrefixReserved() errors
(unless a superuser runs a RESET ALL variant).  This is problematic
for cases such as an extension library upgrade that removes a GUC.
To fix, simply make sure the relevant code paths explicitly allow
it.  Note that we require superuser or privileges on the parameter
to reset it.  This is perhaps a bit more restrictive than is
necessary, but it's not clear whether further relaxing the
requirements is safe.

Oversight in commit 88103567cb.  The ALTER SYSTEM fix is dependent
on commit 2d870b4aef, which first appeared in v17.  Unfortunately,
back-patching that commit would introduce ABI breakage, and while
that breakage seems unlikely to bother anyone, it doesn't seem
worth the risk.  Hence, the ALTER SYSTEM part of this commit is
omitted on v15 and v16.

Reported-by: Mert Alev <mert@futo.org>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/18964-ba09dea8c98fccd6%40postgresql.org
Backpatch-through: 15
2025-08-01 16:52:11 -05:00
Jeff Davis
60121890f7 pg_dump: reject combination of "only" and "with"
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/8ce896d1a05040905cc1a3afbc04e94d8e95669a.camel@j-davis.com
Backpatch-through: 18
2025-08-01 10:06:50 -07:00
Heikki Linnakangas
42b1480eb2 libpq: Complain about missing BackendKeyData later with PQgetCancel()
PostgreSQL always sends the BackendKeyData message at connection
startup, but there are some third party backend implementations out
there that don't support cancellation, and don't send the message
[1]. While the protocol docs left it up for interpretation if that is
valid behavior, libpq in PostgreSQL 17 and below accepted it. It does
not seem like the libpq behavior was intentional though, since it did
so by sending CancelRequest messages with all zeros to such servers
(instead of returning an error or making the cancel a no-op).

In version 18 the behavior was changed to return an error when trying
to create the cancel object with PQgetCancel() or PQcancelCreate().
This was done without any discussion, as part of supporting different
lengths of cancel packets for the new 3.2 version of the protocol.

This commit changes the behavior of PQgetCancel() / PQcancel() once
more to only return an error when the cancel object is actually used
to send a cancellation, instead of when merely creating the object.
The reason to do so is that some clients [2] create a cancel object as
part of their connection creation logic (thus having the cancel object
ready for later when they need it), so if creating the cancel object
returns an error, the whole connection attempt fails. By delaying the
error, such clients will still be able to connect to the third party
backend implementations in question, but when actually trying to
cancel a query, the user will be notified that that is not possible
for the server that they are connected to.

This commit only changes the behavior of the older PQgetCancel() /
PQcancel() functions, not the more modern PQcancelCreate() family of
functions.  I.e. PQcancelCreate() returns a failed connection object
(CONNECTION_BAD) if the server didn't send a cancellation key. Unlike
the old PQgetCancel() function, we're not aware of any clients in the
field that use PQcancelCreate() during connection startup in a way
that would prevent connecting to such servers.

[1] AWS RDS Proxy is definitely one of them, and CockroachDB might be
another.

[2] psycopg2 (but not psycopg3).
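
For illustration, the client pattern in question looks roughly like
this (error handling trimmed); the point is that failure now surfaces
when PQcancel() is called, not when PQgetCancel() is:

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("");     /* settings from environment */
        PGcancel   *cancel = PQgetCancel(conn); /* created eagerly at startup */
        char        errbuf[256];

        /* ... much later, when a running query must be interrupted ... */
        if (cancel == NULL || !PQcancel(cancel, errbuf, sizeof(errbuf)))
            fprintf(stderr, "could not send cancel request: %s\n",
                    cancel ? errbuf : "out of memory");

        PQfreeCancel(cancel);
        PQfinish(conn);
        return 0;
    }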

Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Backpatch-through: 18
Discussion: https://www.postgresql.org/message-id/20250617.101056.1437027795118961504.ishii%40postgresql.org
2025-08-01 18:27:47 +03:00
Amit Kapila
d9f01a287a Fix a deadlock during ALTER SUBSCRIPTION ... DROP PUBLICATION.
A deadlock can occur when the DDL command and the apply worker acquire
catalog locks in different orders while dropping replication origins.

The issue is rare in PG16 and higher branches because, in most cases, the
tablesync worker performs the origin drop in those branches, and its
locking sequence does not conflict with DDL operations.

This patch ensures consistent lock acquisition to prevent such deadlocks.

As per buildfarm.

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Author: Ajin Cherian <itsajin@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 14, where it was introduced
Discussion: https://postgr.es/m/bab95e12-6cc5-4ebb-80a8-3e41956aa297@gmail.com
2025-08-01 07:46:22 +00:00
Tomas Vondra
88914332ea Fix tab completion for ALTER ROLE|USER ... RESET
Commit c407d5426b87 added tab completion for ALTER ROLE|USER ... RESET,
with the intent to offer only the variables actually set on the role.
But as soon as the user started typing something, it would start to
offer all possible matching variables.

Fix this the same way ALTER DATABASE ... RESET does it, i.e. by
properly considering the prefix.

A second issue causing similar symptoms (offering variables that are not
actually set for a role) was caused by a match to another pattern. The
ALTER DATABASE ... RESET was already excluded, so do the same thing for
ROLE/USER.

Report and fix by Dagfinn Ilmari Mannsåker. Backpatch to 18, same as
c407d5426b87.

Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/87qzyghw2x.fsf%40wibble.ilmari.org
Discussion: https://postgr.es/m/87tt4lumqz.fsf%40wibble.ilmari.org
Backpatch-through: 18
2025-07-31 16:05:04 +02:00
Tomas Vondra
72c437f6e4 Schema-qualify unnest() in ALTER DATABASE ... RESET
Commit 9df8727c5067 failed to schema-qualify the unnest() call in the
query used to list the variables in ALTER DATABASE ... RESET. If there's
another unnest() function in the search_path, this could cause either
failures, or even security issues (when the tab-completion gets used by
privileged accounts).

Report and fix by Dagfinn Ilmari Mannsåker. Backpatch to 18, same as
9df8727c5067.

Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/87qzyghw2x.fsf%40wibble.ilmari.org
Discussion: https://postgr.es/m/87tt4lumqz.fsf%40wibble.ilmari.org
Backpatch-through: 18
2025-07-31 16:04:55 +02:00
Noah Misch
c0ae03384f Sort dump objects independent of OIDs, for the 7 holdout object types.
pg_dump sorts objects by their logical names, e.g. (nspname, relname,
tgname), before dependency-driven reordering.  That removes one source
of logically-identical databases differing in their schema-only dumps.
In other words, it helps with schema diffing.  The logical name sort
ignored essential sort keys for constraints, operators, PUBLICATION
... FOR TABLE, PUBLICATION ... FOR TABLES IN SCHEMA, operator classes,
and operator families.  pg_dump's sort then depended on object OID,
yielding spurious schema diffs.  After this change, OIDs affect dump
order only in the event of catalog corruption.  While pg_dump also
wrongly ignored pg_collation.collencoding, CREATE COLLATION restrictions
have been keeping that imperceptible in practical use.

Use techniques like we use for object types already having full sort key
coverage.  Where the pertinent queries weren't fetching the ignored sort
keys, this adds columns to those queries and stores those keys in memory
for the long term.

The ignorance of sort keys became more problematic when commit
172259afb563d35001410dc6daad78b250924038 added a schema diff test
sensitive to it.  Buildfarm member hippopotamus witnessed that.
However, dump order stability isn't a new goal, and this might avoid
other dump comparison failures.  Hence, back-patch to v13 (all supported
versions).

Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/20250707192654.9e.nmisch@google.com
Backpatch-through: 13
2025-07-31 06:37:59 -07:00
Nathan Bossart
da103c7bc8 doc: Adjust documentation for vacuumdb --missing-stats-only.
The sentence in question gave readers the impression that vacuumdb
removes statistics for a period of time while analyzing, but it's
actually meant to convey that --analyze-in-stages temporarily
replaces existing statistics with ones generated with lower
statistics targets.

Reported-by: Frédéric Yhuel <frederic.yhuel@dalibo.com>
Reviewed-by: Frédéric Yhuel <frederic.yhuel@dalibo.com>
Reviewed-by: "David G. Johnston" <david.g.johnston@gmail.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/4b94ca16-7a6d-4581-b2aa-4ea79dbc082a%40dalibo.com
Backpatch-through: 18
2025-07-30 13:04:47 -05:00
Andrew Dunstan
3a954813a0 Remove release note item for Non text modes for pg_dumpall
The feature has been reverted.
2025-07-30 11:34:57 -04:00
Andrew Dunstan
4a9ee867bf Revert Non text modes for pg_dumpall, and pg_restore support
Recent discussions of the mechanisms used to manage global data have
raised concerns about their robustness and security. Rather than try
to deal with those concerns at a very late stage of the release cycle,
the conclusion is to revert these features and work on them for the
next release.

This reverts parts or all of the following commits:

1495eff7bdb Non text modes for pg_dumpall, correspondingly change pg_restore
5db3bf7391d Clean up from commit 1495eff7bdb
289f74d0cb2 Add more TAP tests for pg_dumpall
2ef57908067 Fix a couple of error messages and tests for them
b52a4a5f285 Clean up error messages from 1495eff7bdb
4170298b6ec Further cleanup for directory creation on pg_dump/pg_dumpall
22cb6d28950 Fix memory leak in pg_restore.c
928394b664b Improve various new-to-v18 appendStringInfo calls
39729ec01d2 Fix fat fingering in 22cb6d28950
5822bf21d50 Add missing space in pg_restore documentation.
f09088a01d3 Free memory properly in pg_restore.c
40b9c27014d pg_restore cleanups
4aad2cb7707 Portability fix: isdigit() must be passed an unsigned char.
88e947136b4 Fix typos and grammar in the code
f60420cff66 doc: Alphabetize long options for pg_dump[all].
bc35adee8d7 doc: Put new options in consistent order on man pages
a876464abc7 Message style improvements
dec6643487b Improve pg_dump/pg_dumpall help synopses and terminology
0ebd2425558 Run pgperltidy

Discussion: https://postgr.es/m/20250708212819.09.nmisch@google.com

Backpatch-to: 18
Reviewed-by: Noah Misch <noah@leadboat.com>
2025-07-30 11:32:16 -04:00
Michael Paquier
cd2d52cc6b Fix ./configure checks with __cpuidex() and __cpuid()
The configure checks used two incorrect functions when checking the
presence of some routines in an environment:
- __get_cpuidex() for the check of __cpuidex().
- __get_cpuid() for the check of __cpuid().
This means that Postgres has never been able to detect the presence of
these functions, impacting environments where these exist, like Windows.

Simply fixing the function name does not work.  For example, using
configure with MinGW on Windows causes the checks to detect all four of
__get_cpuid(), __get_cpuid_count(), __cpuidex() and __cpuid() to be
available, causing a compilation failure: this messes up the MinGW
headers, as we would include both <intrin.h> and <cpuid.h>.

The Postgres code expects only one in { __get_cpuid() , __cpuid() } and
one in { __get_cpuid_count() , __cpuidex() } to exist.  This commit
reshapes the configure checks to do exactly what meson is doing, which
has been working well for us: check one, then the other, but never allow
both to be detected in a given build.
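
For reference, the usage pattern the code expects, with exactly one
macro from each pair defined (a sketch patterned on the CPU-feature
probes; simplified):

    #if defined(HAVE__GET_CPUID)
    #include <cpuid.h>
    #elif defined(HAVE__CPUID)
    #include <intrin.h>
    #endif

    static void
    probe_cpuid(unsigned int exx[4])
    {
    #if defined(HAVE__GET_CPUID)
        __get_cpuid(1, &exx[0], &exx[1], &exx[2], &exx[3]);
    #elif defined(HAVE__CPUID)
        __cpuid((int *) exx, 1);
    #endif
    }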

The logic is wrong since 3dc2d62d0486 and 792752af4eb5 where these
checks have been introduced (the second case is most likely a copy-pasto
coming from the first case), with meson documenting that the configure
checks were broken.  As far as I can see, they are not broken once
applied consistently with what the code expects, but let's see if the
buildfarm has something different to say.  The comment in meson.build is
adjusted
as well, to reflect the new reality.

Author: Lukas Fittl <lukas@fittl.com>
Co-authored-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aIgwNYGVt5aRAqTJ@paquier.xyz
Backpatch-through: 13
2025-07-30 11:55:46 +09:00
Bruce Momjian
a60691eb20 doc PG 18 relnotes: update to current
Backpatch-through: 18 only
2025-07-29 22:27:01 -04:00
Heikki Linnakangas
fce7da1e73 Handle cancel requests with PID 0 gracefully
If the client sent a query cancel request with backend PID 0, it
tripped an assertion. With assertions disabled, you got this in the
log instead:

    LOG:  invalid cancel request with PID 0
    LOG:  wrong key in cancel request for process 0

Query cancellations don't even require authentication, so we better
tolerate bogus requests. Fix by turning the assertion into a regular
runtime check.
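
The shape of the change, as a standalone sketch (names hypothetical;
the server reports through ereport() rather than stderr):

    #include <assert.h>
    #include <stdio.h>

    /* Before: a debug build aborts on bogus-but-reachable input. */
    static void
    handle_cancel_old(int backend_pid)
    {
        assert(backend_pid != 0);
        /* ... look up the target backend by PID ... */
    }

    /* After: tolerate the unauthenticated, bogus request gracefully. */
    static void
    handle_cancel_new(int backend_pid)
    {
        if (backend_pid == 0)
        {
            fprintf(stderr, "LOG:  invalid cancel request with PID 0\n");
            return;
        }
        /* ... look up the target backend by PID ... */
    }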

Spotted while testing libpq behavior with a modified server that
didn't send BackendKeyData to the client.

Backpatch-through: 18
2025-07-30 00:40:15 +03:00
Tom Lane
8e5e3ff556 Don't put library-supplied -L/-I switches before user-supplied ones.
For many optional libraries, we extract the -L and -l switches needed
to link the library from a helper program such as llvm-config.  In
some cases we put the resulting -L switches into LDFLAGS ahead of
-L switches specified via --with-libraries.  That risks breaking
the user's intention for --with-libraries.

It's not such a problem if the library's -L switch points to a
directory containing only that library, but on some platforms a
library helper may "helpfully" offer a switch such as -L/usr/lib
that points to a directory holding all standard libraries.  If the
user specified --with-libraries in hopes of overriding the standard
build of some library, the -L/usr/lib switch prevents that from
happening since it will come before the user-specified directory.

To fix, avoid inserting these switches directly into LDFLAGS during
configure, instead adding them to LIBDIRS or SHLIB_LINK.  They will
still eventually get added to LDFLAGS, but only after the switches
coming from --with-libraries.

The same problem exists for -I switches: those coming from
--with-includes should appear before any coming from helper programs
such as llvm-config.  We have not heard field complaints about this
case, but it seems certain that a user attempting to override a
standard library could have issues.

The changes for this go well beyond configure itself, however,
because many Makefiles have occasion to manipulate CPPFLAGS to
insert locally-desirable -I switches, and some of them got it wrong.
The correct ordering is any -I switches pointing at within-the-
source-tree-or-build-tree directories, then those from the tree-wide
CPPFLAGS, then those from helper programs.  There were several places
that risked pulling in a system-supplied copy of libpq headers, for
example, instead of the in-tree files.  (Commit cb36f8ec2 fixed one
instance of that a few months ago, but this exercise found more.)

The Meson build scripts may or may not have any comparable problems,
but I'll leave it to someone else to investigate that.

Reported-by: Charles Samborski <demurgos@demurgos.net>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/70f2155f-27ca-4534-b33d-7750e20633d7@demurgos.net
Backpatch-through: 13
2025-07-29 15:17:40 -04:00
Tom Lane
d5f014d897 Remove unnecessary complication around xmlParseBalancedChunkMemory.
When I prepared 71c0921b6 et al yesterday, I was thinking that the
logic involving explicitly freeing the node_list output was still
needed to dodge leakage bugs in libxml2.  But I was misremembering:
we introduced that only because with early 2.13.x releases we could
not trust xmlParseBalancedChunkMemory's result code, so we had to
look to see if a node list was returned or not.  There's no reason
to believe that xmlParseBalancedChunkMemory will fail to clean up
the node list when required, so simplify.  (This essentially
completes reverting all the non-cosmetic changes in 6082b3d5d.)

Reported-by: Jim Jones <jim.jones@uni-muenster.de>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/997668.1753802857@sss.pgh.pa.us
Backpatch-through: 13
2025-07-29 12:47:19 -04:00
Alexander Korotkov
1fe9e3822c Clarify documentation for the initcap function
This commit documents differences in the definition of word separators for
the initcap function between libc and ICU locale providers.
Backpatch to all supported branches.

Discussion: https://postgr.es/m/804cc10ef95d4d3b298e76b181fd9437%40postgrespro.ru
Author: Oleg Tselebrovskiy <o.tselebrovskiy@postgrespro.ru>
Backpatch-through: 13
2025-07-29 10:43:28 +03:00
Tom Lane
637ead2e1a Avoid regression in the size of XML input that we will accept.
This mostly reverts commit 6082b3d5d, "Use xmlParseInNodeContext
not xmlParseBalancedChunkMemory".  It turns out that
xmlParseInNodeContext will reject text chunks exceeding 10MB, while
(in most libxml2 versions) xmlParseBalancedChunkMemory will not.
The bleeding-edge libxml2 bug that we needed to work around a year
ago is presumably no longer a factor, and the argument that
xmlParseBalancedChunkMemory is semi-deprecated is not enough to
justify a functionality regression.  Hence, go back to doing it
the old way.
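
For context, the call being reinstated has this shape (libxml2 API; a
minimal example with error handling elided):

    #include <libxml/parser.h>

    /* Parse a balanced XML fragment in the context of "doc"; on
     * success the parsed nodes are returned in node_list for the
     * caller to adopt into the document or free. */
    static int
    parse_fragment(xmlDocPtr doc, const char *chunk)
    {
        xmlNodePtr  node_list = NULL;

        if (xmlParseBalancedChunkMemory(doc, NULL, NULL, 0,
                                        (const xmlChar *) chunk,
                                        &node_list) != 0)
            return -1;          /* parse failure */
        /* ... attach node_list to the document ... */
        return 0;
    }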

Reported-by: Michael Paquier <michael@paquier.xyz>
Author: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Erik Wienhold <ewie@ewie.name>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aIGknLuc8b8ega2X@paquier.xyz
Backpatch-through: 13
2025-07-28 16:50:41 -04:00
Robert Haas
44e135ad57 Avoid throwing away the error message in syncrep_yyerror.
Commit 473a575e05979b4dbb28b3f2544f4ec8f184ce65 purported to make this
function stash the error message in *syncrep_parse_result_p, but
it didn't actually.

As a result, an attempt to set synchronous_standby_names to any value
that does not parse resulted in a generic "parser failed." message
rather than anything more specific. This fixes that.

Discussion: http://postgr.es/m/CA+TgmoYF9wPNZ-Q_EMfib_espgHycY-eX__6Tzo2GpYpVXqCdQ@mail.gmail.com
Backpatch-through: 18
2025-07-28 10:57:10 -04:00
Michael Paquier
13eb6bb76d Fix performance regression with flush of pending fixed-numbered stats
The callback added in fc415edf8ca8, used to check if there is any pending
data to flush for fixed-numbered statistics by looping across all the
builtin and custom stats kinds with a call to have_fixed_pending_cb, is
proving able to show up in workloads that do not report any stats
(read-only, no function calls, no WAL, no IO, etc.).  The code used in
v17 was cheaper than what HEAD has introduced, relying on three
boolean checks for WAL, SLRU and IO stats.

This commit switches the code to use a more efficient approach than
fc415edf8ca8, with a single boolean flag that can be switched to "true"
by any fixed-numbered stats kinds to force pgstat_report_stat() to go
through one round of reports.  The flag is reset by pgstat_report_stat()
once a full round of reports is done.  The flag being false means that
fixed-numbered stats kinds saw no activity, and that there is no pending
data to flush.

ac000fca743e took one step in improving the performance by reducing the
number of stats kinds that the backend can hold.  This commit takes a
more drastic step by bringing back the code efficiency to what it was
before v18 with a cheap check at the beginning of pgstat_report_stat()
for its fast-exit path.

The callback have_static_pending_cb is removed as an effect of all that.
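
In outline, the new approach is a single module-level flag (a
simplified sketch; apart from pgstat_report_stat(), the names are
hypothetical):

    /* Set to true by any fixed-numbered stats kind (WAL, IO, SLRU, ...)
     * whenever it records activity that will need flushing. */
    static bool fixed_stats_pending = false;

    void
    pgstat_report_stat(bool force)
    {
        /* cheap fast-exit path for idle or read-only workloads */
        if (!force && !fixed_stats_pending && !variable_stats_pending())
            return;

        /* ... one full round of flush callbacks ... */

        fixed_stats_pending = false;    /* reset after a full round */
    }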

Reported-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/eb224uegsga2hgq7dfq3ps5cduhpqej7ir2hjxzzozjthrekx5@dysei6buqthe
Backpatch-through: 18
2025-07-28 08:15:16 +09:00
Alexander Korotkov
bae5078217 Limit checkpointer requests queue size
If the number of sync requests is big enough, the palloc() call in
AbsorbSyncRequests() will attempt to allocate more than 1 GB of memory,
resulting in failure.  This can lead to an infinite loop in the checkpointer
process, as it repeatedly fails to absorb the pending requests.

This commit limits the checkpointer requests queue size to 10M items. In
addition to preventing the palloc() failure, this change helps to avoid long
queue processing time.

Also, this commit is for backpatching only.  The master branch receives
a more invasive yet comprehensive fix for this problem.

Discussion: https://postgr.es/m/db4534f83a22a29ab5ee2566ad86ca92%40postgrespro.ru
Backpatch-through: 13
2025-07-27 15:10:02 +03:00
Fujii Masao
75f633f54a Fix background worker not restarting after crash-and-restart cycle.
Previously, if a background worker crashed (e.g., due to a SIGKILL) and
the server restarted due to restart_after_crash being enabled,
the worker was not restarted as expected. Background workers without
the never-restart flag should automatically restart in this case.

This issue was introduced in commit 28a520c0b77, which failed to reset
the rw_pid field in the RegisteredBgWorker struct for the crashed worker.

This commit fixes the problem by resetting rw_pid for all eligible
background workers during the crash-and-restart cycle.

Back-patched to v18, where the bug was introduced.

Bug fix patches were proposed by Andrey Rudometov and ChangAo Chen,
but this commit uses a different approach.

Reported-by: Andrey Rudometov <unlimitedhikari@gmail.com>
Reported-by: ChangAo Chen <cca5507@qq.com>
Author: Andrey Rudometov <unlimitedhikari@gmail.com>
Author: ChangAo Chen <cca5507@qq.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: ChangAo Chen <cca5507@qq.com>
Reviewed-by: Shveta Malik <shveta.malik@gmail.com>
Discussion: https://postgr.es/m/CAF6JsWiO=i24qYitWe6ns1sXqcL86rYxdyU+pNYk-WueKPSySg@mail.gmail.com
Discussion: https://postgr.es/m/tencent_E00A056B3953EE6440F0F40F80EC30427D09@qq.com
Backpatch-through: 18
2025-07-25 18:40:49 +09:00
Michael Paquier
f7dfccf960 Fix assertion failure with latch wait in single-user mode
LatchWaitSetPostmasterDeathPos, the latch event position for the
postmaster death event, is initialized under IsUnderPostmaster.
WaitLatch() considered it as a valid wait target in single-user mode
(!IsUnderPostmaster), which was incorrect.

One code path found to fail with an assertion failure is a database drop
in single-user mode while waiting in WaitForProcSignalBarrier() after
the drop.

Oversight in commit 84e5b2f07a5e.

Author: Patrick Stählin <me@packi.ch>
Co-authored-by: Ronan Dunklau <ronan.dunklau@aiven.io>
Discussion: https://postgr.es/m/18996-3a2744c8140488de@postgresql.org
Backpatch-through: 18
2025-07-25 16:17:31 +09:00
Michael Paquier
2973b1cd3a Lower bounds related to pgstats kinds
This commit changes stats kinds to have the following bounds, making
their handling in core cheaper by default:
- PGSTAT_KIND_CUSTOM_MIN 128 -> 24
- PGSTAT_KIND_MAX 256 -> 32

The original numbers were rather high, and showed a performance impact
in pgstat_report_stat() for simple queries, in its early-exit path taken
when there are no pending statistics to flush.  This logic will be
improved further in a follow-up commit, to bring the performance of
pgstat_report_stat() on par with v17 and older versions.
Lowering the bounds is a change worth doing on its own, independently of
the other improvement.

These new numbers should leave enough room for built-in and custom
stats kinds, with stable ID numbers, for years to come.  At the very
least, it should be enough for extension developers to start using this
facility.  The bounds can always be increased in the tree should the
requirements change.

Per discussion with Andres Freund and Bertrand Drouvot.

Discussion: https://postgr.es/m/eb224uegsga2hgq7dfq3ps5cduhpqej7ir2hjxzzozjthrekx5@dysei6buqthe
Backpatch-through: 18
2025-07-25 11:17:51 +09:00
Amit Kapila
33f74b806c Fix duplicate transaction replay during pg_createsubscriber.
Previously, the tool could replay the same transaction twice, once during
recovery, then again during replication after the subscriber was set up.

This occurred because the same recovery_target_lsn was used both to
finalize recovery and to start replication. If
recovery_target_inclusive = true, the transaction at that LSN would be
applied during recovery and then sent again by the publisher, leading
to duplication.

To prevent this, we now set recovery_target_inclusive = false. This
ensures the transaction at recovery_target_lsn is not reapplied during
recovery, avoiding duplication when replication begins.

Bug #18897
Reported-by: Zane Duffield <duffieldzane@gmail.com>
Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/18897-d3db67535860dddb@postgresql.org
2025-07-24 08:50:40 +00:00
Fujii Masao
a8acfb133c doc: Add missing index entries and fix title formatting in pg_buffercache docs.
This commit adds missing index entries for the functions pg_buffercache_numa()
and pg_buffercache_usage_counts() in the pg_buffercache documentation.

It also makes the function titles consistent by adding parentheses after
function names where they were previously missing.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/7d19af4b-7da3-4862-9f52-ff958960bd8d@oss.nttdata.com
Backpatch-through: 18
2025-07-24 11:44:22 +09:00
Tom Lane
3d039b53a1 Fix build breakage on Solaris-alikes with late-model GCC.
Solaris has never bothered to add "const" to the second argument
of PAM conversation procs, as all other Unixen did decades ago.
This resulted in an "incompatible pointer" compiler warning when
building --with-pam, but had no more serious effect than that,
so we never did anything about it.  However, as of GCC 14 the
case is an error, not a warning, by default.

To complicate matters, recent OpenIndiana (and maybe illumos
in general?) *does* supply the "const" by default, so we can't
just assume that platforms using our solaris template need help.

What we can do, short of building a configure-time probe,
is to make solaris.h #define _PAM_LEGACY_NONCONST, which
causes OpenIndiana's pam_appl.h to revert to the traditional
definition, and hopefully will have no effect anywhere else.
Then we can use that same symbol to control whether we include
"const" in the declaration of pam_passwd_conv_proc().

Bug: #18995
Reported-by: Andrew Watkins <awatkins1966@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18995-82058da9ab4337a7@postgresql.org
Backpatch-through: 13
2025-07-23 15:44:29 -04:00
Andres Freund
7b98c55368 aio: Fix assertion, clarify README
The assertion wouldn't have triggered for a long while yet, but this
way we won't accidentally fail to detect the issue if/when it occurs.

Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Discussion: https://postgr.es/m/CAEze2Wj-43JV4YufW23gm=Uwr7Lkj+p0yKctKHxNm1rwFC+_DQ@mail.gmail.com
Backpatch-through: 18
2025-07-22 08:32:14 -04:00
Amit Kapila
0e8c656551 Doc: Fix logical replication examples.
The definition of \dRp+ was modified in commit 7054186c4e. This patch
updates the column list and row filter examples to align with the revised
definition.

Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed by: Peter Smith <smithpb2250@gmail.com>
Backpatch-through: 18, where it was introduced
Discussion: https://postgr.es/m/CANhcyEUvqkSO6b9zi_fs_BBPEge5acj4mf8QKmq2TX-7axa7EQ@mail.gmail.com
2025-07-22 05:56:22 +00:00
Michael Paquier
282b10cb05 doc: Inform about aminsertcleanup optional NULLness
This index AM callback has been introduced in c1ec02be1d79 and it is
optional, currently only being used by BRIN.  Optional callbacks are
documented with NULL as a possible value in amapi.h and indexam.sgml,
but this callback was missing that part of the description.

Reported-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Japin Li <japinli@hotmail.com>
Discussion: https://postgr.es/m/CAHut+PvgYcPmPDi1YdHMJY5upnyGRpc0N8pk1xNB11xDSBwNog@mail.gmail.com
Backpatch-through: 17
2025-07-22 14:34:19 +09:00
Michael Paquier
0ded7615d8 ecpg: Fix NULL pointer dereference during connection lookup
ECPGconnect() caches established connections to the server, supporting
the case of a NULL connection name when a database name is not specified
by its caller.

A follow-up call to ECPGget_PGconn() to get an established connection
from the cached set with a non-NULL name could cause a NULL pointer
dereference if a NULL connection was listed in the cache and checked for
a match.  At least two connections are necessary to reproduce the issue:
one with a NULL name and one with a non-NULL name.

Author:  Aleksander Alekseev <aleksander@tigerdata.com>
Discussion: https://postgr.es/m/CAJ7c6TNvFTPUTZQuNAoqgzaSGz-iM4XR61D7vEj5PsQXwg2RyA@mail.gmail.com
Backpatch-through: 13
2025-07-22 14:00:04 +09:00
Álvaro Herrera
f9545e95c5
pg_dump: include comments on not-null constraints on domains, too
Commit e5da0fe3c22b introduced catalog entries for not-null constraints
on domains; but because commit b0e96f311985 (the original work for
catalogued not-null constraints on tables) forgot to teach pg_dump to
process the comments for them, this one also forgot.  Add that now.

We also need to teach repairDependencyLoop() about the new type of
constraints being possible for domains.

Backpatch-through: 17
Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reported-by: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF-0bqVR=j4jonS6N2Ka6hHUpFyu3_3TWKNhOW_4yFSSg@mail.gmail.com
2025-07-21 11:34:10 +02:00
Fujii Masao
6cf5b10ce9 doc: Document reopen of output file via SIGHUP in pg_recvlogical.
When pg_recvlogical receives a SIGHUP signal, it closes the current
output file and reopens a new one. This is useful since it allows us to
rotate the output file by renaming the current file and sending a SIGHUP.

This behavior was previously undocumented. This commit adds
the missing documentation.

Back-patch to all supported versions.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Shinya Kato <shinya11.kato@gmail.com>
Discussion: https://postgr.es/m/0977fc4f-1523-4ecd-8a0e-391af4976367@oss.nttdata.com
Backpatch-through: 13
2025-07-20 11:59:46 +09:00
Alexander Korotkov
226c567454 Reintroduce test 046_checkpoint_logical_slot
This commit is only for HEAD and v18, where the test has been removed.
It also incorporates the improvements below to the stability and
coverage of the original test, which were already backpatched to v17.
- Add one pg_logical_emit_message() call to force the creation of a record
  that spans two pages.
- Make the logic wait for the checkpoint completion.

Author: Alexander Korotkov <akorotkov@postgresql.org>
Co-authored-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 18
2025-07-19 13:59:27 +03:00
Alexander Korotkov
c71c702f06 Improve the stability of the recovery test 047_checkpoint_physical_slot
The comments in 047_checkpoint_physical_slot show an incomplete
intention to wait for checkpoint completion before performing an
immediate database stop.  However, an immediate node stop can occur both
before and after checkpoint completion.  Both cases should work
correctly, but we would like the test to be more stable and
deterministic.  This is why this commit makes the test explicitly wait
for the checkpoint completion log message.

Discussion: https://postgr.es/m/CAPpHfdurV-j_e0pb%3DUFENAy3tyzxfF%2ByHveNDNQk2gM82WBU5A%40mail.gmail.com
Discussion: https://postgr.es/m/aHXLep3OaX_vRTNQ%40paquier.xyz
Author: Alexander Korotkov <akorotkov@postgresql.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Backpatch-through: 17
2025-07-19 13:51:30 +03:00
Alexander Korotkov
5449d5b7ae Fix infinite wait when reading a partially written WAL record
If a crash occurs while writing a WAL record that spans multiple pages, the
recovery process marks the page with the XLP_FIRST_IS_OVERWRITE_CONTRECORD
flag.  However, logical decoding currently attempts to read the full WAL
record based on its expected size before checking this flag, which can lead
to an infinite wait if the remaining data is never written (e.g., no activity
after crash).

This patch updates the logic to first read the page header and check for
the XLP_FIRST_IS_OVERWRITE_CONTRECORD flag before attempting to reconstruct
the full WAL record.  If the flag is set, decoding correctly identifies
the record as incomplete and avoids waiting for WAL data that will never
arrive.

Discussion: https://postgr.es/m/CAAKRu_ZCOzQpEumLFgG_%2Biw3FTa%2BhJ4SRpxzaQBYxxM_ZAzWcA%40mail.gmail.com
Discussion: https://postgr.es/m/CALDaNm34m36PDHzsU_GdcNXU0gLTfFY5rzh9GSQv%3Dw6B%2BQVNRQ%40mail.gmail.com
Author: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Backpatch-through: 13
2025-07-19 13:44:30 +03:00
Dean Rasheed
27c7c11366 Fix concurrent update trigger issues with MERGE in a CTE.
If a MERGE inside a CTE attempts an UPDATE or DELETE on a table with
BEFORE ROW triggers, and a concurrent UPDATE or DELETE happens, the
merge code would fail (crashing in the case of an UPDATE action, and
potentially executing the wrong action for a DELETE action).

This is the same issue that 9321c79c86 attempted to fix, except now
for a MERGE inside a CTE. As noted in 9321c79c86, what needs to happen
is for the trigger code to exit early, returning the TM_Result and
TM_FailureData information to the merge code, if a concurrent
modification is detected, rather than attempting to do an EPQ
recheck. The merge code will then do its own rechecking, and rescan
the action list, potentially executing a different action in light of
the concurrent update. In particular, the trigger code must never call
ExecGetUpdateNewTuple() for MERGE, since that is bound to fail because
MERGE has its own per-action projection information.
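
For illustration, a minimal sketch of the scenario (table, trigger, and
column names here are hypothetical, not taken from the regression
tests):

  CREATE TABLE tgt (id int PRIMARY KEY, val int);
  CREATE TABLE src (id int, val int);
  CREATE FUNCTION tgt_noop() RETURNS trigger LANGUAGE plpgsql
    AS $$ BEGIN RETURN NEW; END $$;
  CREATE TRIGGER tgt_before BEFORE UPDATE ON tgt
    FOR EACH ROW EXECUTE FUNCTION tgt_noop();

  -- MERGE inside a data-modifying CTE: the plan's commandType no
  -- longer says MERGE, so the trigger code needs the extra parameter.
  WITH m AS (
    MERGE INTO tgt t USING src s ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET val = s.val
    WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
    RETURNING t.id
  )
  SELECT * FROM m;

A concurrent UPDATE of a matched tgt row from another session is what
used to crash here.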

Commit 9321c79c86 did this using estate->es_plannedstmt->commandType
in the trigger code to detect that a MERGE was being executed, which
is fine for a plain MERGE command, but does not work for a MERGE
inside a CTE. Fix by passing that information to the trigger code as
an additional parameter passed to ExecBRUpdateTriggers() and
ExecBRDeleteTriggers().

Back-patch as far as v17 only, since MERGE cannot appear inside a CTE
prior to that. Additionally, take care to preserve the trigger ABI in
v17 (though not in v18, which is still in beta).

Bug: #18986
Reported-by: Yaroslav Syrytsia <me@ys.lc>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18986-e7a8aac3d339fa47@postgresql.org
Backpatch-through: 17
2025-07-18 09:59:40 +01:00
Tom Lane
bfa9b25c94 Fix PQport to never return NULL unless the connection is NULL.
This is the documented behavior, and it worked that way before
v10.  However, addition of the connhost[] array created cases
where conn->connhost[conn->whichhost].port is NULL.  The rest
of libpq is careful to substitute DEF_PGPORT[_STR] for a null
or empty port string, but we failed to do so here, leading to
possibly returning NULL.  As of v18 that causes psql's \conninfo
command to segfault.  Older psql versions avoid that, but it's
pretty likely that other clients have trouble with this,
so we'd better back-patch the fix.

In stable branches, just revert to our historical behavior of
returning an empty string when there was no user-given port
specification.  However, it seems substantially more useful and
indeed more correct to hand back DEF_PGPORT_STR in such cases,
so let's make v18 and master do that.

Author: Daniele Varrazzo <daniele.varrazzo@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA+mi_8YTS8WPZPO0PAb2aaGLwHuQ0DEQRF0ZMnvWss4y9FwDYQ@mail.gmail.com
Backpatch-through: 13
2025-07-17 12:46:57 -04:00
Álvaro Herrera
e0d3f3cfb6
Remove assertion from PortalRunMulti
We have an assertion to ensure that a command tag has been assigned by
the time we're done executing, but if we happen to execute a command
with no queries, the assertion would fail.  Per discussion, rather than
contort things to get a tag assigned, just remove the assertion.

Oversight in 2f9661311b83.  That commit also retained a comment
explaining logic that had been adjacent to it but was since diffused
into various places, leaving no single place apt to keep part of the
comment.  Remove that part, and rewrite what remains for extra clarity.

Bug: #18984
Backpatch-through: 13
Reported-by: Aleksander Alekseev <aleksander@tigerdata.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Michaël Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/18984-0f4778a6599ac3ae@postgresql.org
2025-07-17 17:40:22 +02:00
Nathan Bossart
c4b5cd0956 doc: Add note about how to use pg_overexplain.
This commit adds a note to the pg_overexplain page that describes
how to use it (LOAD, session_preload_libraries, or
shared_preload_libraries).  The new text is mostly lifted from the
auto_explain page.  We should probably consider centralizing this
information in the future.
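
For example, once loaded, the module's extra EXPLAIN options become
available in the session (a minimal sketch):

  LOAD 'pg_overexplain';
  EXPLAIN (DEBUG) SELECT 1;
  EXPLAIN (RANGE_TABLE) SELECT 1;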

While at it, add a missing "module" to the opening sentence.

Reviewed-by: "David G. Johnston" <david.g.johnston@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://postgr.es/m/aHVWKM8l8kLlZzgv%40nathan
Backpatch-through: 18
2025-07-17 10:25:59 -05:00
Amit Langote
02d21cfd4b Remove duplicate line
In 231b7d670b21, while copy-pasting some code into
ExecEvalJsonCoercionFinish(), I (amitlan) accidentally introduced
a duplicate line.  Remove it.

Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxHcf=BpmRAJcjgfjOUfV76MwKnyz1x3ErXsWL26EAFmng@mail.gmail.com
2025-07-17 14:29:53 +09:00
Michael Paquier
4fcbe06aa8 Fix inconsistent LWLock tranche names for MultiXact*
The terms used in wait_event_names.txt and lwlock.c were inconsistent
for MultiXactOffsetSLRU and MultiXactMemberSLRU, which could cause joins
between pg_wait_events and pg_stat_activity to fail.  lwlock.c is
adjusted in this commit to what the historical name of the event has
always been, and what is documented.
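
The kind of join that the inconsistency could break, as a sketch (rows
only appear while a backend is actually waiting on an event):

  SELECT a.pid, a.wait_event_type, a.wait_event, w.description
  FROM pg_stat_activity a
  JOIN pg_wait_events w
    ON (a.wait_event_type, a.wait_event) = (w.type, w.name);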

Oversight in 53c2a97a9266.  08b9b9e043bb has fixed a similar
inconsistency some time ago.

Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/aHdxN0D0hKXzHFQG@ip-10-97-1-34.eu-west-3.compute.internal
Backpatch-through: 17
2025-07-17 09:32:49 +09:00
Daniel Gustafsson
409c63f9f6 doc: Add example file for COPY
The paragraph for introducing INSERT and COPY discussed how a file
could be used for bulk loading with COPY, without actually showing
what the file would look like.  This adds a programlisting for the
file contents.
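
Roughly along these lines, assuming the tutorial's weather table (a
sketch, not the exact documentation example); the input file is plain
tab-separated text, one row per line:

  -- /home/user/weather.txt, tab-separated:
  --   San Francisco	46	50	0.25	1994-11-27
  --   San Francisco	43	57	0.0	1994-11-29
  COPY weather FROM '/home/user/weather.txt';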

Backpatch to all supported branches since this example has lacked
the file contents since PostgreSQL 7.2.

Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/158017814191.19852.15019251381150731439@wrigleys.postgresql.org
Backpatch-through: 13
2025-07-17 00:21:18 +02:00
Álvaro Herrera
dca0e9693b
Fix dumping of comments on invalid constraints on domains
We skip dumping constraints together with domains if they are invalid
('separate') so that they appear after data -- but their comments were
dumped together with the domain definition, which in effect leads to the
comment being dumped when the constraint does not yet exist.  Delay
them in the same way.

Oversight in 7eca575d1c28; backpatch all the way back.

Author: jian he <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxF_C2pe6J_+nPr6C5jf5rQnbYP8XOKr4HM8yHZtp2aQqQ@mail.gmail.com
2025-07-16 19:22:53 +02:00
Jeff Davis
973caf7291 pg_dumpall: Skip global objects with --statistics-only or --no-schema.
Previously, pg_dumpall would still dump global objects such as roles
and tablespaces even when --statistics-only or --no-schema was specified.
Since these global objects are treated as schema rather than data, they
should be skipped in these cases.

This commit fixes the issue by ensuring that global objects are not
dumped when either --statistics-only or --no-schema is used.

Author: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/08129593-6f3c-4fb9-94b7-5aa2eefb99b0@oss.nttdata.com
Backpatch-through: 18
2025-07-16 09:57:07 -07:00
Nathan Bossart
40c66f8585 psql: Fix note on project naming in output of \copyright.
This adjusts the wording to match the changes in commits
5987553fde, a233a603ba, and pgweb commit 2d764dbc08.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/aHVo791guQR6uqwT%40nathan
Backpatch-through: 13
2025-07-16 11:50:34 -05:00
Fujii Masao
ac7c044831 doc: Fix confusing description of streaming option in START_REPLICATION.
Previously, the documentation described the streaming option as a boolean,
which is outdated since it's no longer a boolean as of protocol version 4.
This could confuse users.

This commit updates the description to remove the "boolean" reference and
clearly list the valid values for the streaming option.

Back-patch to v16, where the streaming option changed to a non-boolean.

Author: Euler Taveira <euler@eulerto.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/8d21fb98-5c25-4dee-8387-e5a62b01ea7d@app.fastmail.com
Backpatch-through: 16
2025-07-16 08:34:16 +09:00
Fujii Masao
da9a888da2 doc: Clarify that total_vacuum_time excludes VACUUM FULL.
The last_vacuum and vacuum_count fields in pg_stat_all_tables already
state that they do not include VACUUM FULL. However, total_vacuum_time,
which also excludes VACUUM FULL, did not mention this. This could
mislead users into thinking VACUUM FULL time is included.

To address this, this commit updates the documentation for
pg_stat_all_tables to explicitly state that total_vacuum_time does not
count VACUUM FULL.
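
For example, all of the following cumulative fields count only plain
VACUUM (the table name is a placeholder):

  SELECT relname, vacuum_count, total_vacuum_time, n_ins_since_vacuum
  FROM pg_stat_all_tables
  WHERE relname = 'my_table';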

Back-patched to v18, where total_vacuum_time was introduced.

Additionally, this commit clarifies that n_ins_since_vacuum also
excludes VACUUM FULL. Although n_ins_since_vacuum was added in v13,
we are not back-patching this change to stable branches, as it is
a documentation improvement, not a bug fix.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/2ac375d1-591b-4f1b-a2af-f24335567866@oss.nttdata.com
Backpatch-through: 18
2025-07-16 08:06:17 +09:00
Tom Lane
f8ce5dea43 Doc: clarify description of regexp fields in pg_ident.conf.
The grammar was a little shaky and confusing here, so word-smith it
a bit.  Also, adjust the comments in pg_ident.conf.sample to use the
same terminology as the SGML docs, in particular "DATABASE-USERNAME"
not "PG-USERNAME".

Back-patch appropriate subsets.  I did not risk changing
pg_ident.conf.sample in released branches, but it still seems OK
to change it in v18.

Reported-by: Alexey Shishkin <alexey.shishkin@enterprisedb.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/175206279327.3157504.12519088928605422253@wrigleys.postgresql.org
Backpatch-through: 13
2025-07-15 18:53:00 -04:00
Tom Lane
0b6dfce0ce Silence uninitialized-value warnings in compareJsonbContainers().
Because not every path through JsonbIteratorNext() sets val->type,
some compilers complain that compareJsonbContainers() is comparing
possibly-uninitialized values.  The paths that don't set it return
WJB_DONE, WJB_END_ARRAY, or WJB_END_OBJECT, so it's clear by
manual inspection that the "(ra == rb)" code path is safe, and
indeed we aren't seeing warnings about that.  But the (ra != rb)
case is much less obviously safe.  In Assert-enabled builds it
seems that the asserts rejecting WJB_END_ARRAY and WJB_END_OBJECT
persuade gcc 15.x not to warn, which makes little sense because
it's impossible to believe that the compiler can prove of its
own accord that ra/rb aren't WJB_DONE here.  (In fact they never
will be, so the code isn't wrong, but why is there no warning?)
Without Asserts, the appearance of warnings is quite unsurprising.

We discussed fixing this by converting those two Asserts into
pg_assume, but that seems not very satisfactory when it's so unclear
why the compiler is or isn't warning: the warning could easily
reappear with some other compiler version.  Let's fix it in a less
magical, more future-proof way by changing JsonbIteratorNext()
so that it always does set val->type.  The cost of that should be
pretty negligible, and it makes the function's API spec less squishy.

Reported-by: Erik Rijkers <er@xs4all.nl>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/988bf1bc-3f1f-99f3-bf98-222f1cd9dc5e@xs4all.nl
Discussion: https://postgr.es/m/0c623e8a204187b87b4736792398eaf1@postgrespro.ru
Backpatch-through: 13
2025-07-15 18:11:18 -04:00
Tom Lane
c33e55ac91 Doc: clarify description of current-date/time functions.
Minor wordsmithing of the func.sgml paragraph describing
statement_timestamp() and allied functions: don't switch between
"statement" and "command" when those are being used to mean about
the same thing.

Also, add some text to protocol.sgml describing the perhaps-surprising
behavior these functions have in a multi-statement Query message.
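
A quick illustration of the difference (a sketch; within one
transaction, transaction_timestamp() stays fixed while
clock_timestamp() keeps advancing):

  BEGIN;
  SELECT transaction_timestamp(), statement_timestamp(), clock_timestamp();
  SELECT pg_sleep(1);
  SELECT transaction_timestamp(), statement_timestamp(), clock_timestamp();
  COMMIT;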

Reported-by: P M <petermittere@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/175223006802.3157505.14764328206246105568@wrigleys.postgresql.org
Backpatch-through: 13
2025-07-15 16:35:56 -04:00
Tom Lane
8bd92fc514 Stamp 18beta2. 2025-07-14 16:12:49 -04:00
Peter Eisentraut
3c9aafb775 Translation updates
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: ef3b54be7d834f5f96cb7c86bdbeb1758cfbf583
2025-07-14 13:54:38 +03:00
Thomas Munro
7d11f36e71 aio: Fix configuration reload in IO workers.
method_worker.c installed SignalHandlerForConfigReload, but it failed to
actually process reload requests.  That hasn't yet produced any concrete
problem reports in terms of GUC changes it should have cared about in
v18, but it was inconsistent.

It did cause problems for a couple of patches in development that need
IO workers to react to ALTER SYSTEM + pg_reload_conf().  Fix extracted
from one of those patches.

Back-patch to 18.

Reported-by: Dmitry Dolgov <9erthalion6@gmail.com>
Discussion: https://postgr.es/m/sh5uqe4a4aqo5zkkpfy5fobe2rg2zzouctdjz7kou4t74c66ql%40yzpkxb7pgoxf
2025-07-12 16:34:06 +12:00
Thomas Munro
b4c19da93a aio: Remove obsolete IO worker ID references.
In an ancient ancestor of this code, the postmaster assigned IDs to IO
workers.  Now it tracks them in an unordered array and doesn't know
their IDs, so it might be confusing to readers that it still referred to
their indexes as IDs.

No change in behavior, just variable name and error message cleanup.

Back-patch to 18.

Discussion: https://postgr.es/m/CA%2BhUKG%2BwbaZZ9Nwc_bTopm4f-7vDmCwLk80uKDHj9mq%2BUp0E%2Bg%40mail.gmail.com
2025-07-12 14:45:36 +12:00
Thomas Munro
b2afb06763 aio: Regularize IO worker internal naming.
Adopt PgAioXXX convention for pgaio module type names.  Rename a
function that didn't use a pgaio_worker_ submodule prefix.  Rename the
internal submit function's arguments to match the indirectly relevant
function pointer declaration and nearby examples.  Rename the array of
handle IDs in PgAioSubmissionQueue to sqes, a term of art seen in the
systems it emulates, also clarifying that they're not IO handle
pointers as the old name might imply.

No change in behavior, just type, variable and function name cleanup.

Back-patch to 18.

Discussion: https://postgr.es/m/CA%2BhUKG%2BwbaZZ9Nwc_bTopm4f-7vDmCwLk80uKDHj9mq%2BUp0E%2Bg%40mail.gmail.com
2025-07-12 14:45:34 +12:00
Thomas Munro
20b8b5dab9 Fix stale idle flag when IO workers exit.
Otherwise we could choose a worker that has exited and crash while
trying to wake it up.

Back-patch to 18.

Reported-by: Tomas Vondra <tomas@vondra.me>
Reported-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/t5aqjhkj6xdkido535pds7fk5z4finoxra4zypefjqnlieevbg%40357aaf6u525j
2025-07-12 13:14:22 +12:00
Tom Lane
ccacaf4fae Fix inconsistent quoting of role names in ACLs.
getid() and putid(), which parse and deparse role names within ACL
input/output, applied isalnum() to see if a character within a role
name requires quoting.  They did this even for non-ASCII characters,
which is problematic because the results would depend on encoding,
locale, and perhaps even platform.  So it's possible that putid()
could elect not to quote some string that, later in some other
environment, getid() will decide is not a valid identifier, causing
dump/reload or similar failures.

To fix this in a way that won't risk interoperability problems
with unpatched versions, make getid() treat any non-ASCII as a
legitimate identifier character (hence not requiring quotes),
while making putid() treat any non-ASCII as requiring quoting.
We could remove the resulting excess quoting once we feel that
no unpatched servers remain in the wild, but that'll be years.

A lesser problem is that getid() did the wrong thing with an input
consisting of just two double quotes ("").  That has to represent an
empty string, but getid() read it as a single double quote instead.
The case cannot arise in the normal course of events, since we don't
allow empty-string role names.  But let's fix it while we're here.
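
For example, with a hypothetical non-ASCII role name, putid() now
always quotes it when the ACL is deparsed:

  CREATE ROLE "Müller";
  GRANT SELECT ON my_table TO "Müller";    -- my_table is a placeholder
  -- The relacl entry deparses with the role quoted, e.g. "Müller"=r/...
  SELECT relacl FROM pg_class WHERE relname = 'my_table';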

Although we've not heard field reports of problems with non-ASCII
role names, there's clearly a hazard there, so back-patch to all
supported versions.

Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/3792884.1751492172@sss.pgh.pa.us
Backpatch-through: 13
2025-07-11 18:50:13 -04:00
Jacob Champion
3d23f68c55 oauth: Run Autoconf tests with correct compiler flags
Commit b0635bfda split off the CPPFLAGS/LDFLAGS/LDLIBS for libcurl into
their own separate Makefile variables, but I neglected to move the
existing AC_CHECKs for Curl into a place where they would make use of
those variables. They instead tested the system libcurl, which 1) is
unhelpful if a different Curl is being used for the build and 2) will
fail the build entirely if no system libcurl exists. Correct the order
of operations here.

Reported-by: Ivan Kush <ivan.kush@tantorlabs.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Ivan Kush <ivan.kush@tantorlabs.com>
Discussion: https://postgr.es/m/8a611028-51a1-408c-b592-832e2e6e1fc9%40tantorlabs.com
Backpatch-through: 18
2025-07-11 10:26:18 -07:00
Amit Kapila
f36e577451 Fix the handling of two GUCs during upgrade.
Previously, the check_hook functions for max_slot_wal_keep_size and
idle_replication_slot_timeout would incorrectly raise an ERROR for values
set in postgresql.conf during upgrade, even though those values were not
actively used in the upgrade process.

To prevent logical slot invalidation during upgrade, we used to set
special values for these GUCs. Now, instead of relying on those values, we
directly prevent WAL removal and logical slot invalidation caused by
max_slot_wal_keep_size and idle_replication_slot_timeout.

Note: PostgreSQL 17 does not include the idle_replication_slot_timeout
GUC, so related changes were not backported.

BUG #18979
Reported-by: jorsol <jorsol@gmail.com>
Author: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed by: vignesh C <vignesh21@gmail.com>
Reviewed by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/219561.1751826409@sss.pgh.pa.us
Discussion: https://postgr.es/m/18979-a1b7fdbb7cd181c6@postgresql.org
2025-07-11 10:28:29 +05:30
Tatsuo Ishii
a1973e5466 Doc: fix outdated protocol version.
In the description of StartupMessage, the protocol version was left at
3.0.  This commit updates it to 3.2.

Author: Tatsuo Ishii <ishii@postgresql.org>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/20250626.155608.568829483879866256.ishii%40postgresql.org
2025-07-11 10:22:09 +09:00
Fujii Masao
afb64a56d9 doc: Clarify meaning of "idle" in idle_replication_slot_timeout.
This commit updates the documentation to clarify that "idle" in
idle_replication_slot_timeout means the replication slot is inactive,
that is, not currently used by any replication connection.

Without this clarification, "idle" could be misinterpreted to mean
that the slot is not advancing or that no data is being streamed,
even if a connection exists.

Back-patch to v18 where idle_replication_slot_timeout was added.

Author: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Gunnar Morling <gunnar.morling@googlemail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CADGJaX_0+FTguWpNSpgVWYQP_7MhoO0D8=cp4XozSQgaZ40Odw@mail.gmail.com
Backpatch-through: 18
2025-07-11 08:45:56 +09:00
Fujii Masao
37c76aeb9a Change unit of idle_replication_slot_timeout to seconds.
Previously, the idle_replication_slot_timeout parameter used minutes
as its unit, based on the assumption that values would typically exceed
one minute in production environments. However, this caused unexpected
behavior: specifying a value below 30 seconds would round down to 0,
effectively disabling the timeout. This could be surprising to users.

To allow finer-grained control and avoid such confusion, this commit changes
the unit of idle_replication_slot_timeout to seconds. Larger values can
still be specified easily using standard time suffixes, for example,
'24h' for 24 hours.
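
For example, a sub-minute value now behaves as expected (a sketch):

  ALTER SYSTEM SET idle_replication_slot_timeout = '45s';
  SELECT pg_reload_conf();
  SHOW idle_replication_slot_timeout;    -- 45s, no longer rounded to 0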

Back-patch to v18 where idle_replication_slot_timeout was added.

Reported-by: Gunnar Morling <gunnar.morling@googlemail.com>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CADGJaX_0+FTguWpNSpgVWYQP_7MhoO0D8=cp4XozSQgaZ40Odw@mail.gmail.com
Backpatch-through: 18
2025-07-11 08:42:16 +09:00
Daniel Gustafsson
39f01083fa Fix sslkeylogfile error handling logging
When sslkeylogfile has been set but the file fails to open in an
otherwise successful connection, the log entry added to the conn
object is never printed.  Instead print the error on stderr for
increased visibility.  This is a debugging tool so using stderr
for logging is appropriate.  Also while there, remove the umask
call in the callback as it's not useful.

Issues noted by Peter Eisentraut in post-commit review; backpatch
down to 18, where support for sslkeylogfile was added.

Author: Daniel Gustafsson <daniel@yesql.se>
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/70450bee-cfaa-48ce-8980-fc7efcfebb03@eisentraut.org
Backpatch-through: 18
2025-07-10 23:26:51 +02:00
Nathan Bossart
36026b0fe3 pg_dump: Fix object-type sort priority for large objects.
Commit a45c78e328 moved large object metadata from SECTION_PRE_DATA
to SECTION_DATA but neglected to move PRIO_LARGE_OBJECT in
dbObjectTypePriorities accordingly.  While this hasn't produced any
known live bugs, it causes problems for a proposed patch that
optimizes upgrades with many large objects.  Fixing the priority
might also make the topological sort step marginally faster by
reducing the number of ordering violations that have to be fixed.

Reviewed-by: Nitin Motiani <nitinmotiani@google.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aBkQLSkx1zUJ-LwJ%40nathan
Discussion: https://postgr.es/m/aG_5DBCjdDX6KAoD%40nathan
Backpatch-through: 17
2025-07-10 15:52:41 -05:00
Michael Paquier
99fd638ba0 btree_gist: Merge the last two versions into version 1.8
During the development cycle of v18, btree_gist has been bumped twice:
first to 1.8 for the addition of translate_cmptype support functions
(originally 7406ab623fee, renamed in 32edf732e8dc), then to 1.9 for the
addition of sortsupport functions (e4309f73f698).

There is no need for two version bumps in a module for a single major
release of PostgreSQL.  This commit unifies both upgrades to a single
SQL script, downgrading btree_gist to 1.8.

Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/13c61807-f702-4afe-9a8d-795e2fd40923@illuminatedcomputing.com
Backpatch-through: 18
2025-07-10 12:23:30 +09:00
Tom Lane
7bd752c1fb Link libpq with libdl if the platform needs that.
Since b0635bfda, libpq uses dlopen() and related functions.  On some
platforms these are not supplied by libc, but by a separate library
libdl, in which case we need to make sure that that dependency is
known to the linker.  Meson seems to take care of that automatically,
but the Makefile didn't cater for it.

Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/1328170.1752082586@sss.pgh.pa.us
Backpatch-through: 18
2025-07-09 14:21:00 -04:00
Masahiko Sawada
765a4c94cc Fix tab-completion for COPY and \copy options.
Commit c273d9d8ce4 reworked tab-completion of COPY and \copy in psql
and added support for completing options within WITH clauses. However,
the same COPY options were suggested for both COPY TO and COPY FROM
commands, even though some options are only valid for one or the
other.

This commit separates the COPY options for COPY FROM and COPY TO
commands to provide more accurate auto-completion suggestions.

Back-patch to v14 where tab-completion for COPY and \copy options
within WITH clauses was first supported.

Author: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/079e7a2c801f252ae8d522b772790ed7@oss.nttdata.com
Backpatch-through: 14
2025-07-09 05:45:31 -07:00
Amit Kapila
5d9e675b36 Doc: Improve logical replication failover documentation.
Clarified that the failover steps apply to a specific PostgreSQL subscriber
and added guidance for verifying replication slot synchronization during
planned failover. Additionally, corrected the standby query to avoid false
positives by checking invalidation_reason IS NULL instead of conflicting.
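
The corrected standby-side check looks roughly like this (slot names
are placeholders):

  SELECT slot_name,
         (synced AND NOT temporary AND invalidation_reason IS NULL)
           AS failover_ready
  FROM pg_replication_slots
  WHERE slot_name IN ('sub1', 'sub2');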

Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Author: Shveta Malik <shveta.malik@gmail.com>
Backpatch-through: 17, where it was introduced
Discussion: https://www.postgresql.org/message-id/CAExHW5uiZ-fF159=jwBwPMbjZeZDtmcTbN+hd4mrURLCg2uzJg@mail.gmail.com
2025-07-09 09:59:40 +05:30
Michael Paquier
601a3133ae doc PG 18 relnotes: Remove item about PQservice()
This libpq API has been removed in fc3edb52fbb9, a commit that forgot
one reference in the release notes.  This applies only to v18.
2025-07-09 13:23:13 +09:00
Michael Paquier
fc3edb52fb libpq: Remove PQservice()
This routine has been introduced as a shortcut to be able to retrieve a
service name from an active connection, for psql.  Per discussion, and
as it is only used by psql, let's remove it to not clutter the libpq API
more than necessary.

The logic in psql is instead replaced by lookups of PQconninfoOption
for the active connection, updated each time psql syncs its variables,
with the prompt shortcut relying on the synced variable.

Reported-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20250706161319.c1.nmisch@google.com
Backpatch-through: 18
2025-07-09 12:46:18 +09:00
Tom Lane
075554ec6c Fix low-probability memory leak in XMLSERIALIZE(... INDENT).
xmltotext_with_options() did not consider the possibility that
pg_xml_init() could fail --- most likely due to OOM.  If that
happened, the already-parsed xmlDoc structure would be leaked.
Oversight in commit 483bdb2af.

Bug: #18981
Author: Dmitry Kovalenko <d.kovalenko@postgrespro.ru>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18981-9bc3c80f107ae925@postgresql.org
Backpatch-through: 16
2025-07-08 12:50:19 -04:00
Michael Paquier
330db576f8 pg_walsummary: Improve stability of test checking statistics
Per buildfarm member culicidae, the query checking for stats reported by
the WAL summarizer related to WAL reads is proving to be unstable.

Instead of a one-time query, this commit replaces the logic with a
polling query checking for the WAL read stats, making the test more
reliable on machines that could be slow with the stats reports.

This test has been introduced in f4694e0f35b2, so backpatch down to v18.

Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/f35ba3db-fca7-4693-bc35-6db64488e4b1@gmail.com
Backpatch-through: 18
2025-07-08 13:48:52 +09:00
Andres Freund
9a5334c0b4 aio: Combine io_uring memory mappings, if supported
By default io_uring creates a shared memory mapping for each io_uring
instance, leading to a large number of memory mappings. Unfortunately a large
number of memory mappings slows things down, backend exit is particularly
affected.  To address that, newer kernels (6.5) support using user-provided
memory for these mappings.  By putting the relevant memory into shared
memory we don't need any additional mappings.

On a system with a new enough kernel and liburing, there is no discernible
overhead when doing a pgbench -S -C anymore.

Reported-by: MARK CALLAGHAN <mdcallag@gmail.com>
Reviewed-by: "Burd, Greg" <greg@burd.me>
Reviewed-by: Jim Nasby <jnasby@upgrade.com>
Discussion: https://postgr.es/m/CAFbpF8OA44_UG+RYJcWH9WjF7E3GA6gka3gvH6nsrSnEe9H0NA@mail.gmail.com
Backpatch-through: 18
2025-07-07 21:04:03 -04:00
Jacob Champion
3a797c2491 oauth: Fix kqueue detection on OpenBSD
In b0635bfda, I added an early header check to the Meson OAuth support,
which was intended to duplicate the later checks for
HAVE_SYS_[EVENT|EPOLL]_H. However, I implemented the new test via
check_header() -- which tries to compile -- rather than has_header(),
which just looks for the file's existence.

The distinction matters on OpenBSD, where <sys/event.h> can't be
compiled without including prerequisite headers, so -Dlibcurl=enabled
failed on that platform. Switch to has_header() to fix this.

Note that reviewers expressed concern about the difference between our
Autoconf feature tests (which compile headers) and our Meson feature
tests (which do not). I'm not opposed to aligning the two, but I want to
avoid making bigger changes as part of this fix.

Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/flat/CAOYmi+kdR218ke2zu74oTJvzYJcqV1MN5=mGAPqZQuc79HMSVA@mail.gmail.com
Backpatch-through: 18
2025-07-07 11:58:04 -07:00
Tom Lane
440c5ee202 Restore the ability to run pl/pgsql expression queries in parallel.
pl/pgsql's notion of an "expression" is very broad, encompassing
any SQL SELECT query that returns a single column and no more than
one row.  So there are cases, for example evaluation of an aggregate
function, where the query involves significant work and it'd be useful
to run it with parallel workers.  This used to be possible, but
commits 3eea7a0c9 et al unintentionally disabled it.

The simplest fix is to make exec_eval_expr() pass maxtuples = 0
rather than 2 to exec_run_select().  This avoids the new rule that
we will never use parallelism when a nonzero "count" limit is passed
to ExecutorRun().  (Note that the pre-3eea7a0c9 behavior was indeed
unsafe, so reverting that rule is not in the cards.)  The reason
for passing 2 before was that exec_eval_expr() will throw an error
if it gets more than one returned row, so we figured that as soon
as we have two rows we know that will happen and we might as well
stop running the query.  That choice was cost-free when it was made;
but disabling parallelism is far from cost-free, so now passing 2
amounts to optimizing a failure case at the expense of useful cases.
An expression query that can return more than one row is certainly
broken.  People might now need to wait a bit longer to discover such
breakage; but hopefully few will use enormously expensive cases as
their first test of new pl/pgsql logic.
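
For example, the assignment below is an "expression" in pl/pgsql terms,
and the aggregate underneath may again be computed with parallel
workers (big_table is a placeholder):

  DO $$
  DECLARE
    total numeric;
  BEGIN
    -- exec_eval_expr() now passes maxtuples = 0, so the planner is
    -- free to pick a parallel plan for this aggregate.
    total := (SELECT sum(amount) FROM big_table);
    RAISE NOTICE 'total = %', total;
  END
  $$;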

Author: Dipesh Dhameliya <dipeshdhameliya125@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CABgZEgdfbnq9t6xXJnmXbChNTcWFjeM_6nuig41tm327gYi2ig@mail.gmail.com
Backpatch-through: 13
2025-07-07 14:33:34 -04:00
Michael Paquier
8d1071e7da Fix incompatibility with libxml2 >= 2.14
libxml2 has deprecated the members of xmlBuffer, and it is recommended
to access them with dedicated routines.  We have only one case in the
tree where this shows an impact: xml2/xpath.c where "content" was
getting directly accessed.  The rest of the code looked fine, checking
the PostgreSQL code with libxml2 close to the top of its "2.14" branch.

xmlBufferContent() has existed since the year 2000, based on a check of
the upstream libxml2 tree, so let's switch to it.

Like 400928b83bd2, backpatch all the way down as this can have an impact
on all the branches already released once newer versions of libxml2 get
more popular.

Reported-by: Walid Ibrahim <walidib@amazon.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/aGdSdcR4QTjEHX6s@paquier.xyz
Backpatch-through: 13
2025-07-07 08:54:30 +09:00
Álvaro Herrera
1e007722fa
Fix new pg_upgrade query not to rely on regnamespace
regnamespace was invented in 9.5, while pg_upgrade claims to support
versions back to 9.0.  But we don't need it, thanks to a simple query
change tested by Tom Lane.

Discussion: https://postgr.es/m/202507041645.afjl5rssvrgu@alvherre.pgsql
2025-07-04 21:30:05 +02:00
Álvaro Herrera
5aba3e637d
pg_upgrade: Add missing newline in error message
Minor oversight in 347758b12063
2025-07-04 18:31:24 +02:00
Álvaro Herrera
07da2985d6
pg_upgrade: check for inconsistencies in not-null constraints w/inheritance
With tables defined like this,
  CREATE TABLE ip (id int PRIMARY KEY);
  CREATE TABLE ic (id int) INHERITS (ip);
  ALTER TABLE ic ALTER id DROP NOT NULL;

pg_upgrade fails during the schema restore phase due to this error:
  ERROR: column "id" in child table must be marked NOT NULL

This can only be fixed by marking the child column as NOT NULL before
the upgrade, which could take an arbitrary amount of time (because ic's
data must be scanned).  Have pg_upgrade's check mode warn if that
condition is found, so that users know what to adjust before running the
upgrade for real.
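
With the tables above, the adjustment to make before the upgrade is
simply the following (this is the step that must scan ic's data):

  ALTER TABLE ic ALTER COLUMN id SET NOT NULL;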

Author: Ali Akbar <the.apaan@gmail.com>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Backpatch-through: 13
Discussion: https://postgr.es/m/CACQjQLoMsE+1pyLe98pi0KvPG2jQQ94LWJ+PTiLgVRK4B=i_jg@mail.gmail.com
2025-07-04 18:05:43 +02:00
Michael Paquier
29a4b63c6b Disable commit timestamps during bootstrap
Attempting to use commit timestamps during bootstrapping leads to an
assertion failure, that can be reached for example with an initdb -c
that enables track_commit_timestamp.  It makes little sense to register
a commit timestamp for a BootstrapTransactionId, so let's disable the
activation of the module in this case.

This problem has been independently reported once by each author of this
commit.  Each author has proposed basically the same patch, relying on
IsBootstrapProcessingMode() to skip the use of commit_ts during
bootstrap.  The test addition is a suggestion by me, and is applied down
to v16.

Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Author: Andy Fan <zhihuifan1213@163.com>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/OSCPR01MB14966FF9E4C4145F37B937E52F5102@OSCPR01MB14966.jpnprd01.prod.outlook.com
Discussion: https://postgr.es/m/87plejmnpy.fsf@163.com
Backpatch-through: 13
2025-07-04 15:10:17 +09:00
Tom Lane
3d7a96871c Obtain required table lock during cross-table updates, redux.
Commits 8319e5cb5 et al missed the fact that ATPostAlterTypeCleanup
contains three calls to ATPostAlterTypeParse, and the other two
also need protection against passing a relid that we don't yet
have lock on.  Add similar logic to those code paths, and add
some test cases demonstrating the need for it.

In v18 and master, the test cases demonstrate that there's a
behavioral discrepancy between stored generated columns and virtual
generated columns: we disallow changing the expression of a stored
column if it's used in any composite-type columns, but not that of
a virtual column.  Since the expression isn't actually relevant to
either sort of composite-type usage, this prohibition seems
unnecessary; but changing it is a matter for separate discussion.
For now we are just documenting the existing behavior.

Reported-by: jian he <jian.universality@gmail.com>
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: CACJufxGKJtGNRRSXfwMW9SqVOPEMdP17BJ7DsBf=tNsv9pWU9g@mail.gmail.com
Backpatch-through: 13
2025-07-03 13:46:07 -04:00
Fujii Masao
0cd7fcaa85 doc: Update outdated descriptions of wal_status in pg_replication_slots.
The documentation for pg_replication_slots previously mentioned only
max_slot_wal_keep_size as a condition under which the wal_status column
could show unreserved or lost. However, since commit be87200,
replication slots can also be invalidated due to horizon or wal_level,
and since commit ac0e33136ab, idle_replication_slot_timeout can also
trigger this state.

This commit updates the description of the wal_status column to
reflect that max_slot_wal_keep_size is not the only cause of the lost state.
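
The relevant columns can be inspected directly, for instance (a sketch;
invalidation_reason exists in v17 and later):

  SELECT slot_name, wal_status, invalidation_reason
  FROM pg_replication_slots
  WHERE wal_status IN ('unreserved', 'lost');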

Back-patched to v16, where the additional invalidation cases were introduced.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Discussion: https://postgr.es/m/78b34e84-2195-4f28-a151-5d204a382fdd@oss.nttdata.com
Backpatch-through: 16
2025-07-03 23:09:07 +09:00
Álvaro Herrera
8af310b331
Prevent creation of duplicate not-null constraints for domains
This was previously harmless, but now that we create pg_constraint rows
for those, duplicates are not welcome anymore.

Backpatch to 18.

Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Discussion: https://postgr.es/m/CACJufxFSC0mcQ82bSk58sO-WJY4P-o4N6RD2M0D=DD_u_6EzdQ@mail.gmail.com
2025-07-03 11:46:12 +02:00
Fujii Masao
f0151e2a4e doc: Remove incorrect note about wal_status in pg_replication_slots.
The documentation previously stated that the wal_status column is NULL
if restart_lsn is NULL in the pg_replication_slots view. This is incorrect,
and wal_status can be "lost" even when restart_lsn is NULL.

This commit removes the incorrect description.

Back-patched to all supported versions.

Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Discussion: https://postgr.es/m/c9d23cdc-b5dd-455a-8ee9-f1f24d701d89@oss.nttdata.com
Backpatch-through: 13
2025-07-03 16:04:59 +09:00
Tom Lane
5d0800000e Correctly copy the target host identification in PQcancelCreate.
PQcancelCreate failed to copy struct pg_conn_host's "type" field,
instead leaving it zero (a/k/a CHT_HOST_NAME).  This seemingly
has no great ill effects if it should have been CHT_UNIX_SOCKET
instead, but if it should have been CHT_HOST_ADDRESS then a
null-pointer dereference will occur when the cancelConn is used.

Bug: #18974
Reported-by: Maxim Boguk <maxim.boguk@gmail.com>
Author: Sergei Kornilov <sk@zsrv.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18974-575f02b2168b36b3@postgresql.org
Backpatch-through: 17
2025-07-02 15:48:03 -04:00
Peter Geoghegan
4938737d54 Update obsolete row compare preprocessing comments.
Restore the nbtree preprocessing comments describing how we mark nbtree
row compare members required to the way they were prior to 2016 bugfix
commit a298a1e0.

Oversight in commit bd3f59fd, which made nbtree preprocessing revert to
the original 2006 rules, but neglected to revert these comments.

Backpatch-through: 18
2025-07-02 12:36:34 -04:00
Álvaro Herrera
e16c9cd331
Fix error message for ALTER CONSTRAINT ... NOT VALID
Trying to alter a constraint so that it becomes NOT VALID results in an
error that assumes the constraint is a foreign key.  This is potentially
wrong, so give a more generic error message.

While at it, give CREATE CONSTRAINT TRIGGER a better error message as
well.

Co-authored-by: jian he <jian.universality@gmail.com>
Co-authored-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Co-authored-by: Amul Sul <sulamul@gmail.com>
Discussion: https://postgr.es/m/CACJufxHSp2puxP=q8ZtUGL1F+heapnzqFBZy5ZNGUjUgwjBqTQ@mail.gmail.com
2025-07-02 17:02:27 +02:00
Peter Geoghegan
4cb889d21f Make row compares robust during nbtree array scans.
Recent nbtree bugfix commit 5f4d98d4 added a special case to the code
that sets up a page-level prefix of keys that are definitely satisfied
by every tuple on the page: whenever _bt_set_startikey reached a row
compare key, we'd refuse to apply the pstate.forcenonrequired behavior
in scans where that usually happens (scans with a higher-order array
key).  That hack made the scan avoid essentially the same infinite
cycling behavior that also affected nbtree scans with redundant keys
(keys that preprocessing could not eliminate) prior to commit f09816a0.
There are now serious doubts about this row compare workaround.

Testing has shown that a scan with a row compare key and an array key
could still read the same leaf page twice (without the scan's direction
changing), which isn't supposed to be possible following the SAOP
enhancements added by Postgres 17 commit 5bf748b8.  Also, we still
allowed a required row compare key to be used with forcenonrequired mode
when its header key happened to be beyond the pstate.ikey set by
_bt_set_startikey, which was complicated and brittle.

The underlying problem was that row compares had inconsistent rules
around how scans start (which keys can be used for initial positioning
purposes) and how scans end (which keys can set continuescan=false).
Quals with redundant keys that could not be eliminated by preprocessing
also had that same quality to them prior to today's bugfix f09816a0.  It
now seems prudent to bring row compare keys in line with the new charter
for required keys, by making the start and end rules symmetric.

This commit fixes two points of disagreement between _bt_first and
_bt_check_rowcompare.  Firstly, _bt_check_rowcompare was capable of
ending the scan at the point where it needed to compare an ISNULL-marked
row compare member that came immediately after a required row compare
member.  _bt_first now has symmetric handling for NULL row compares.
Secondly, _bt_first had its own ideas about which keys were safe to use
for initial positioning purposes.  It could use fewer or more keys than
_bt_check_rowcompare.  _bt_first now uses the same requiredness markings
as _bt_check_rowcompare for this.

Now that _bt_first and _bt_check_rowcompare agree on how to start and
end scans, we can get rid of the forcenonrequired special case, without
any risk of infinite cycling.  This approach also makes row compare keys
behave more like regular scalar keys, particularly within _bt_first.

Fixing these inconsistencies necessitates dealing with a related issue
with the way that row compares were marked required by preprocessing: we
didn't mark any lower-order row members required following 2016 bugfix
commit a298a1e0.  That approach was overly broad.  The bug in question was
actually an oversight in how _bt_check_rowcompare dealt with tuple NULL
values that failed to satisfy a scan key marked required in the opposite
scan direction (it was a bug in 2011 commits 6980f817 and 882368e8, not
a bug in 2006 commit 3a0a16cb).  Go back to marking row compare members
as required using the original 2006 rules, and fix the 2016 bug in a
more principled way: by limiting use of the "set continuescan=false with
a key required in the opposite scan direction upon encountering a NULL
tuple value" optimization to the first/most significant row member key.
While it isn't safe to use an implied IS NOT NULL qualifier to end the
scan when it comes from a required lower-order row compare member key,
it _is_ generally safe for such a required member key to end the scan --
provided the key is marked required in the _current_ scan direction.

This fixes what was arguably an oversight in either commit 5f4d98d4 or
commit 8a510275.  It is a direct follow-up to today's commit f09816a0.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=pcijHL_mA0_TJ5LiTB28QpQ0cGtT-ccFV=KzuunNDDQ@mail.gmail.com
Backpatch-through: 18
2025-07-02 09:48:14 -04:00
Peter Geoghegan
7c365eb504 Make handling of redundant nbtree keys more robust.
nbtree preprocessing's handling of redundant (and contradictory) keys
created problems for scans with = arrays.  It was just about possible
for a scan with an = array key and one or more redundant keys (keys that
preprocessing could not eliminate due an incomplete opfamily and a
cross-type key) to get stuck.  Testing has shown that infinite cycling
where the scan never manages to make forward progress was possible.
This could happen when the scan's arrays were reset in _bt_readpage's
forcenonrequired=true path (added by bugfix commit 5f4d98d4) when the
arrays weren't at least advanced up to the same point that they were in
at the start of the _bt_readpage call.  Earlier redundant keys prevented
the finaltup call to _bt_advance_array_keys from reaching lower-order
keys that needed to be used to sufficiently advance the scan's arrays.

To fix, make preprocessing leave the scan's keys in a state that is as
close as possible to how it'll usually leave them (in the common case
where there are no redundant keys that preprocessing failed to eliminate).
Now nbtree preprocessing _reliably_ leaves behind at most one required
>/>= key per index column, and at most one required </<= key per index
column.  Columns that have one or more = keys that are eligible to be
marked required (based on the traditional rules) prioritize the = keys
over redundant inequality keys; they'll _reliably_ be left with only one
of the = keys as the index column's only required key.

Keys that are not marked required (whether due to the new preprocessing
step running or for some other reason) are relocated to the end of the
so->keyData[] array as needed.  That way they'll always be evaluated
after the scan's required keys, and so cannot prevent code in places
like _bt_advance_array_keys and _bt_first from reaching a required key.

Also teach _bt_first to decide which initial positioning keys to use
based on the same requiredness markings that have long been used by
_bt_checkkeys/_bt_advance_array_keys.  This is a necessary condition for
reliably avoiding infinite cycling.  _bt_advance_array_keys expects to
be able to reason about what'll happen in the next _bt_first call should
it start another primitive index scan, by evaluating inequality keys
that were marked required in the opposite-to-scan direction only.
Now everybody (_bt_first, _bt_checkkeys, and _bt_advance_array_keys)
will always agree on which exact key will be used on each index column
to start and/or end the scan (except when row compare keys are involved,
which have similar problems not addressed by this commit).

An upcoming commit will finish off the work started by this commit by
harmonizing how _bt_first, _bt_checkkeys, and _bt_advance_array_keys
apply row compare keys to start and end scans.

This fixes what was arguably an oversight in either commit 5f4d98d4 or
commit 8a510275.

Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Discussion: https://postgr.es/m/CAH2-Wz=ds4M+3NXMgwxYxqU8MULaLf696_v5g=9WNmWL2=Uo2A@mail.gmail.com
Backpatch-through: 18
2025-07-02 09:40:48 -04:00
Daniel Gustafsson
87f0d3cd8d doc: pg_buffercache documentation wordsmithing
A few words seemed to have gone missing in the leading paragraphs.

Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/aGTQYZz9L0bjlzVL@ip-10-97-1-34.eu-west-3.compute.internal
Backpatch-through: 18
2025-07-02 11:42:36 +02:00
Masahiko Sawada
7c6ededac8 Fix missing FSM vacuum opportunities on tables without indexes.
Commit c120550edb86 optimized the vacuuming of relations without
indexes (a.k.a. one-pass strategy) by directly marking dead item IDs
as LP_UNUSED. However, the periodic FSM vacuum was still checking if
dead item IDs had been marked as LP_DEAD when attempting to vacuum the
FSM every VACUUM_FSM_EVERY_PAGES blocks. This condition was never met
due to the optimization, resulting in missed FSM vacuum
opportunities.

This commit modifies the periodic FSM vacuum condition to use the
number of tuples deleted during HOT pruning. This count includes items
marked as either LP_UNUSED or LP_REDIRECT, both of which are expected
to result in new free space to report.

Back-patch to v17 where the vacuum optimization for tables with no
indexes was introduced.

Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CAD21AoBL8m6B9GSzQfYxVaEgvD7-Kr3AJaS-hJPHC+avm-29zw@mail.gmail.com
Backpatch-through: 17
2025-07-01 23:25:17 -07:00
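
A minimal sketch of the revised trigger described in the commit above; the struct and names here are illustrative stand-ins, not the actual vacuumlazy.c code, and the threshold constant is simplified:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t BlockNumber;

    /* Illustrative stand-in for the relevant vacuum state. */
    typedef struct
    {
        int64_t     tuples_deleted;             /* deleted during HOT pruning */
        BlockNumber next_fsm_block_to_vacuum;
    } VacuumState;

    #define VACUUM_FSM_EVERY_PAGES 1024     /* illustrative; the real value
                                             * corresponds to several GB of blocks */

    /*
     * Old condition: "were any items marked LP_DEAD on this page?" -- never
     * true under the one-pass strategy.  New condition: "did HOT pruning
     * delete any tuples?", which covers LP_UNUSED and LP_REDIRECT items.
     */
    static bool
    should_vacuum_fsm(const VacuumState *vac, BlockNumber blkno,
                      int64_t tuples_deleted_before_page)
    {
        return vac->tuples_deleted > tuples_deleted_before_page &&
               blkno - vac->next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES;
    }
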
John Naylor
3e73d87353 Remove implicit cast from 'void *'
Commit e2809e3a101 added code to a header which assigns a pointer
to void to a pointer to unsigned char. This causes build errors for
extensions written in C++. Fix by adding an explicit cast.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CANWCAZaCq9AHBuhs%3DMx7Gg_0Af9oRU7iAqr0itJCtfmsWwVmnQ%40mail.gmail.com
Backpatch-through: 18
2025-07-02 11:51:53 +07:00
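
A self-contained illustration of the C/C++ difference behind the fix above (not the actual header): C converts void * implicitly, C++ does not, so an explicit cast keeps the header compilable as both:

    static unsigned char *
    as_bytes(void *p)
    {
        /* "return p;" compiles as C but is an error as C++ */
        return (unsigned char *) p;     /* explicit cast: valid in both */
    }
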
Michael Paquier
d09d137934 Fix bug in archive streamer with LZ4 decompression
When decompressing some input data, the calculation for the initial
starting point and the initial size were incorrect, potentially leading
to failures when decompressing contents with LZ4.  These initialization
points are fixed in this commit, bringing the logic closer to what
exists for gzip and zstd.

The compressed data itself is intact (for example, backups taken
with LZ4 can still be decompressed with a "lz4" command); only the
decompression path reading the input data was affected by this issue.

This code path impacts pg_basebackup and pg_verifybackup, which can use
the LZ4 decompression routines with an archive streamer, or any tools
that try to use the archive streamers in src/fe_utils/.

The issue is easier to reproduce with files that compress poorly, like
ones filled with random data, at sizes of at least 512kB, but it could
happen with any file stored in a data
folder.  Some tests are added based on this idea, with a file filled
with random bytes grabbed from the backend, written at the root of the
data folder.  This is proving good enough to reproduce the original
problem.

Author: Mikhail Gribkov <youzhick@gmail.com>
Discussion: https://postgr.es/m/CAMEv5_uQS1Hg6KCaEP2JkrTBbZ-nXQhxomWrhYQvbdzR-zy-wA@mail.gmail.com
Backpatch-through: 15
2025-07-02 13:48:41 +09:00
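
A hedged sketch of the input-tracking pattern involved, using the public LZ4 frame API rather than the archive streamer code itself; the bug class fixed above is miscomputing the starting point and remaining size of the input between calls:

    #include <lz4frame.h>

    static int
    decompress_step(LZ4F_dctx *dctx, const char *in, size_t len,
                    char *out, size_t outcap)
    {
        size_t consumed = 0;

        while (consumed < len)
        {
            size_t src_size = len - consumed;   /* remaining input */
            size_t dst_size = outcap;
            size_t rc = LZ4F_decompress(dctx, out, &dst_size,
                                        in + consumed, &src_size, NULL);

            if (LZ4F_isError(rc))
                return -1;
            consumed += src_size;   /* src_size now holds bytes consumed */
            /* ... forward dst_size bytes of decompressed output ... */
        }
        return 0;
    }
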
Peter Eisentraut
b897a58556 Update comment for IndexInfo.ii_NullsNotDistinct
Commit 7a7b3e11e61 added the ii_NullsNotDistinct field, but the
comment was not updated.

Author: Japin Li <japinli@hotmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ME0P300MB04453E6C7EA635F0ECF41BFCB6832%40ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2025-07-01 23:13:01 +02:00
Nathan Bossart
3386b2fe7a Add commit 07448b3969 to .git-blame-ignore-revs. 2025-07-01 14:35:59 -05:00
Nathan Bossart
c8b9f75111 Document pg_get_multixact_members().
Oversight in commit 0ac5ad5134.

Author: Sami Imseih <samimseih@gmail.com>
Co-authored-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/20150619215231.GT133018%40postgresql.org
Discussion: https://postgr.es/m/CAA5RZ0sjQDDwJfMRb%3DZ13nDLuRpF13ME2L_BdGxi0op8RKjmDg%40mail.gmail.com
Backpatch-through: 13
2025-07-01 13:54:38 -05:00
Peter Eisentraut
399997d8cc Update comment for IndexInfo.ii_WithoutOverlaps
Commit fc0438b4e80 added the ii_WithoutOverlaps field, but the comment
was not updated.

Author: Japin Li <japinli@hotmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ME0P300MB04453E6C7EA635F0ECF41BFCB6832%40ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2025-07-01 20:39:20 +02:00
Peter Eisentraut
b71351e1f2 Fix outdated comment for IndexInfo
Commit 78416235713 removed the ii_OpclassOptions field, but the
comment was not updated.

Author: Japin Li <japinli@hotmail.com>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/ME0P300MB04453E6C7EA635F0ECF41BFCB6832%40ME0P300MB0445.AUSP300.PROD.OUTLOOK.COM
2025-07-01 20:17:38 +02:00
Tom Lane
581305a465 Make sure IOV_MAX is defined.
We stopped defining IOV_MAX on non-Windows systems in 75357ab94, on
the assumption that every non-Windows system defines it in <limits.h>
as required by X/Open.  GNU Hurd, however, doesn't follow that
standard either.  Put back the old logic to assume 16 if it's
not defined.

Author: Michael Banck <mbanck@gmx.net>
Co-authored-by: Christoph Berg <myon@debian.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6862e8d1.050a0220.194b8d.76fa@mx.google.com
Discussion: https://postgr.es/m/6846e0c3.df0a0220.39ef9b.c60e@mx.google.com
Backpatch-through: 16
2025-07-01 12:40:35 -04:00
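
The fallback amounts to a few preprocessor lines; a sketch of the pattern described above (the real code lives in PostgreSQL's port headers):

    #include <limits.h>

    /* X/Open says <limits.h> defines IOV_MAX, but GNU Hurd doesn't; fall
     * back to the historical assumption of 16. */
    #ifndef IOV_MAX
    #define IOV_MAX 16
    #endif
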
Tom Lane
45c5276628 Make safeguard against incorrect flags for fsync more portable.
The existing code assumed that O_RDONLY is defined as 0, but this is
not required by POSIX and is not true on GNU Hurd.  We can avoid
the assumption by relying on O_ACCMODE to mask the fcntl() result.
(Hopefully, all supported platforms define that.)

Author: Michael Banck <mbanck@gmx.net>
Co-authored-by: Samuel Thibault
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/6862e8d1.050a0220.194b8d.76fa@mx.google.com
Discussion: https://postgr.es/m/68480868.5d0a0220.1e214d.68a6@mx.google.com
Backpatch-through: 13
2025-07-01 12:08:20 -04:00
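
A minimal sketch of the portable check, assuming a POSIX fcntl(); masking with O_ACCMODE avoids assuming O_RDONLY == 0:

    #include <fcntl.h>

    /* Return 1 if fd is open for writing, 0 if read-only, -1 on error. */
    static int
    opened_for_write(int fd)
    {
        int flags = fcntl(fd, F_GETFL);

        if (flags < 0)
            return -1;
        return (flags & O_ACCMODE) != O_RDONLY;
    }
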
Tomas Vondra
07448b3969 Fix indentation in pg_numa code
Broken by commits 7fe2f67c7c9f, 81f287dc923f and bf1119d74a79. Backpatch
to 18, same as the offending commits.

Backpatch-through: 18
2025-07-01 15:24:19 +02:00
Tomas Vondra
54ac4944c3 Add CHECK_FOR_INTERRUPTS into pg_numa_query_pages
Querying the NUMA status can be quite time-consuming, especially with
large shared buffers. 8cc139bec34a called numa_move_pages() once, for
all buffers, and we had to wait for the syscall to complete.

But with the chunking, introduced by 7fe2f67c7c to work around a kernel
bug, we can do CHECK_FOR_INTERRUPTS() after each chunk, allowing users
to abort the execution.

Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aEtDozLmtZddARdB@msg.df7cb.de
Backpatch-through: 18
2025-07-01 12:59:03 +02:00
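
A hedged sketch of the chunked loop, assuming libnuma's numa_move_pages() and the backend's CHECK_FOR_INTERRUPTS() macro; the function name and chunk constant are illustrative:

    #include <numa.h>
    #include "miscadmin.h"      /* CHECK_FOR_INTERRUPTS() */

    #define QUERY_CHUNK 1024    /* illustrative; see the chunk-size commit below */

    static int
    query_pages_chunked(void **pages, int *status, unsigned long npages)
    {
        for (unsigned long off = 0; off < npages; off += QUERY_CHUNK)
        {
            unsigned long n = npages - off;

            if (n > QUERY_CHUNK)
                n = QUERY_CHUNK;
            /* pid 0 + NULL nodes: query the status of our own pages */
            if (numa_move_pages(0, n, pages + off, NULL, status + off, 0) < 0)
                return -1;
            CHECK_FOR_INTERRUPTS();     /* let the user abort between chunks */
        }
        return 0;
    }
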
Tomas Vondra
14e52227e5 Silence valgrind about pg_numa_touch_mem_if_required
When querying NUMA status of pages in shared memory, we need to touch
the memory first to get valid results. This may trigger valgrind
reports, because some of the memory (e.g. unpinned buffers) may be
marked as noaccess.

Solved by adding a valgrind suppression.  An alternative would be to
adjust the access/noaccess status before touching the memory, but that
seems far too invasive. It would require all those places to have
detailed knowledge of what the shared memory stores.

The pg_numa_touch_mem_if_required() macro is replaced with a function.
Macros are invisible to suppressions, so it'd have to suppress reports
for the caller - e.g. pg_get_shmem_allocations_numa(). So we'd suppress
reports for the whole function, and that seems too heavy-handed.  It might
easily hide other valid issues.

Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aEtDozLmtZddARdB@msg.df7cb.de
Backpatch-through: 18
2025-07-01 12:33:29 +02:00
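
A sketch of the macro-to-function change described above (illustrative body, not the committed one); a named function gives valgrind suppressions something to match on, where a macro would attribute the access to its caller:

    #include <stdint.h>

    static void
    pg_numa_touch_mem_if_required(void *ptr)
    {
        volatile uint64_t touch;

        touch = *(volatile uint64_t *) ptr;     /* force the page to be mapped */
        (void) touch;
    }
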
Tomas Vondra
45879f48f1 Limit the size of numa_move_pages requests
There's a kernel bug in do_pages_stat(), affecting systems combining
64-bit kernel and 32-bit user space. The function splits the request
into chunks of 16 pointers, but forgets the pointers are 32-bit when
advancing to the next chunk. Some of the pointers get skipped, and
memory after the array is interpreted as pointers. The result is that
the produced status of memory pages is mostly bogus.

Systems combining 64-bit and 32-bit environments like this might seem
rare, but that's not the case - all 32-bit Debian packages are built in
a 32-bit chroot on a system with a 64-bit kernel.

This is a long-standing kernel bug (since 2010), affecting pretty much
all kernels, so it'll take time until all systems get a fixed kernel.
Luckily, we can work around the issue by chunking the requests the same
way do_pages_stat() does, at least on affected systems. We don't know
what kernel a 32-bit build will run on, so all 32-bit builds use chunks
of 16 elements (the largest chunk before hitting the issue).

64-bit builds are not affected by this issue, and so could work without
the chunking. But chunking has other advantages, so we apply chunking
even for 64-bit builds, with chunks of 1024 elements.

Reported-by: Christoph Berg <myon@debian.org>
Author: Christoph Berg <myon@debian.org>
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/aEtDozLmtZddARdB@msg.df7cb.de
Context: https://marc.info/?l=linux-mm&m=175077821909222&w=2
Backpatch-through: 18
2025-07-01 12:03:08 +02:00
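
A sketch of the chunk-size choice under the constraints described above, assuming PostgreSQL's SIZEOF_VOID_P from pg_config.h (the chunk macro name is illustrative):

    /* 32-bit builds may run on a 64-bit kernel with the do_pages_stat()
     * bug, so stay at 16 pointers per chunk (the largest safe size);
     * 64-bit builds chunk too, but only for interruptibility. */
    #if SIZEOF_VOID_P == 4
    #define NUMA_QUERY_CHUNK 16
    #else
    #define NUMA_QUERY_CHUNK 1024
    #endif
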
Amit Langote
eb37fe716a Fix typos in comments
Commit 19d8e2308bc added enum values with the prefix TU_, but a few
comments still referred to TUUI_, which was used in development
versions of the patches committed as 19d8e2308bc.

Author: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/20250701110216.8ac8a9e4c6f607f1d954f44a@sraoss.co.jp
Backpatch-through: 16
2025-07-01 13:13:40 +09:00
Andres Freund
95163cbe11 aio: Fix reference to outdated name
Reported-by: Antonin Houska <ah@cybertec.at>
Author: Antonin Houska <ah@cybertec.at>
Discussion: https://postgr.es/m/5250.1751266701@localhost
Backpatch-through: 18, where da7226993fd4 introduced this
2025-06-30 10:21:49 -04:00
Daniel Gustafsson
b2a57747ba doc: Fix typo in pg_sync_replication_slots documentation
Commit 1546e17f9d0 accidentally misspelled additionally as
additionaly.  Backpatch to v17 to match where the original
commit was backpatched.

Author: Daniel Gustafsson <daniel@yesql.se>
Backpatch-through: 17
2025-06-30 10:12:31 +02:00
Joe Conway
42625ecda2 Adapt REL_18_STABLE to its new status as a stable branch
Per the checklist in RELEASE_CHANGES for the creation of a new stable
branch, this commit does the following things:
- Arm gen_node_support.pl's nodetag ABI stability, based on the contents
of nodetags.h.
- Update URLs of top-level README and Makefile to point to the new
stable version.
2025-06-29 23:00:00 -04:00
1174 changed files with 102001 additions and 108447 deletions

View File

@ -7,7 +7,7 @@ https://github.com/bazelbuild/starlark/blob/master/spec.md
See also .cirrus.yml and src/tools/ci/README
"""
load("cirrus", "env", "fs", "re", "yaml")
load("cirrus", "env", "fs")
def main():
@ -18,36 +18,19 @@ def main():
1) the contents of .cirrus.yml
2) computed environment variables
3) if defined, the contents of the file referenced by the, repository
2) if defined, the contents of the file referenced by the, repository
level, REPO_CI_CONFIG_GIT_URL variable (see
https://cirrus-ci.org/guide/programming-tasks/#fs for the accepted
format)
4) .cirrus.tasks.yml
3) .cirrus.tasks.yml
"""
output = ""
# 1) is evaluated implicitly
# Add 2)
additional_env = compute_environment_vars()
env_fmt = """
###
# Computed environment variables start here
###
{0}
###
# Computed environment variables end here
###
"""
output += env_fmt.format(yaml.dumps({'env': additional_env}))
# Add 3)
repo_config_url = env.get("REPO_CI_CONFIG_GIT_URL")
if repo_config_url != None:
print("loading additional configuration from \"{}\"".format(repo_config_url))
@ -55,75 +38,12 @@ def main():
else:
output += "\n# REPO_CI_CONFIG_URL was not set\n"
# Add 4)
# Add 3)
output += config_from(".cirrus.tasks.yml")
return output
def compute_environment_vars():
cenv = {}
###
# Some tasks are manually triggered by default because they might use too
# many resources for users of free Cirrus credits, but they can be
# triggered automatically by naming them in an environment variable e.g.
# REPO_CI_AUTOMATIC_TRIGGER_TASKS="task_name other_task" under "Repository
# Settings" on Cirrus CI's website.
default_manual_trigger_tasks = ['mingw', 'netbsd', 'openbsd']
repo_ci_automatic_trigger_tasks = env.get('REPO_CI_AUTOMATIC_TRIGGER_TASKS', '')
for task in default_manual_trigger_tasks:
name = 'CI_TRIGGER_TYPE_' + task.upper()
if repo_ci_automatic_trigger_tasks.find(task) != -1:
value = 'automatic'
else:
value = 'manual'
cenv[name] = value
###
###
# Parse "ci-os-only:" tag in commit message and set
# CI_{$OS}_ENABLED variable for each OS
# We want to disable SanityCheck if testing just a specific OS. This
# shortens push-wait-for-ci cycle time a bit when debugging operating
# system specific failures. Just treating it as an OS in that case
# suffices.
operating_systems = [
'compilerwarnings',
'freebsd',
'linux',
'macos',
'mingw',
'netbsd',
'openbsd',
'sanitycheck',
'windows',
]
commit_message = env.get('CIRRUS_CHANGE_MESSAGE')
match_re = r"(^|.*\n)ci-os-only: ([^\n]+)($|\n.*)"
# re.match() returns an array with a tuple of (matched-string, match_1, ...)
m = re.match(match_re, commit_message)
if m and len(m) > 0:
os_only = m[0][2]
os_only_list = re.split(r'[, ]+', os_only)
else:
os_only_list = operating_systems
for os in operating_systems:
os_enabled = os in os_only_list
cenv['CI_{0}_ENABLED'.format(os.upper())] = os_enabled
###
return cenv
def config_from(config_src):
"""return contents of config file `config_src`, surrounded by markers
indicating start / end of the included file

View File

@ -31,31 +31,6 @@ env:
TEMP_CONFIG: ${CIRRUS_WORKING_DIR}/src/tools/ci/pg_ci_base.conf
PG_TEST_EXTRA: kerberos ldap ssl libpq_encryption load_balance oauth
# Postgres config args for the meson builds, shared between all meson tasks
# except the 'SanityCheck' task
MESON_COMMON_PG_CONFIG_ARGS: -Dcassert=true -Dinjection_points=true
# Meson feature flags shared by all meson tasks, except:
# SanityCheck: uses almost no dependencies.
# Windows - VS: has fewer dependencies than listed here, so defines its own.
# Linux: uses the 'auto' feature option to test meson feature autodetection.
MESON_COMMON_FEATURES: >-
-Dauto_features=disabled
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
-Dplperl=enabled
-Dplpython=enabled
-Ddocs=enabled
-Dicu=enabled
-Dlibxml=enabled
-Dlibxslt=enabled
-Dlz4=enabled
-Dpltcl=enabled
-Dreadline=enabled
-Dzlib=enabled
-Dzstd=enabled
# What files to preserve in case tests fail
on_failure_ac: &on_failure_ac
@ -97,7 +72,7 @@ task:
# push-wait-for-ci cycle time a bit when debugging operating system specific
# failures. Uses skip instead of only_if, as cirrus otherwise warns about
# only_if conditions not matching.
skip: $CI_SANITYCHECK_ENABLED == false
skip: $CIRRUS_CHANGE_MESSAGE =~ '.*\nci-os-only:.*'
env:
CPUS: 4
@ -189,19 +164,10 @@ task:
-c debug_parallel_query=regress
PG_TEST_PG_UPGRADE_MODE: --link
MESON_FEATURES: >-
-Ddtrace=enabled
-Dgssapi=enabled
-Dlibcurl=enabled
-Dnls=enabled
-Dpam=enabled
-Dtcl_version=tcl86
-Duuid=bsd
<<: *freebsd_task_template
depends_on: SanityCheck
only_if: $CI_FREEBSD_ENABLED
only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\nci-os-only:[^\n]*freebsd.*'
sysinfo_script: |
id
@ -230,10 +196,10 @@ task:
configure_script: |
su postgres <<-EOF
meson setup \
${MESON_COMMON_PG_CONFIG_ARGS} \
--buildtype=debug \
-Dcassert=true -Dinjection_points=true \
-Duuid=bsd -Dtcl_version=tcl86 -Ddtrace=auto \
-Dextra_lib_dirs=/usr/local/lib -Dextra_include_dirs=/usr/local/include/ \
${MESON_COMMON_FEATURES} ${MESON_FEATURES} \
build
EOF
build_script: su postgres -c 'ninja -C build -j${BUILD_JOBS} ${MBUILD_TARGET}'
@ -273,6 +239,7 @@ task:
task:
depends_on: SanityCheck
trigger_type: manual
env:
# Below are experimentally derived to be a decent choice.
@ -290,9 +257,7 @@ task:
matrix:
- name: NetBSD - Meson
# See REPO_CI_AUTOMATIC_TRIGGER_TASKS in .cirrus.star
trigger_type: $CI_TRIGGER_TYPE_NETBSD
only_if: $CI_NETBSD_ENABLED
only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\nci-os-only:[^\n]*netbsd.*'
env:
OS_NAME: netbsd
IMAGE_FAMILY: pg-ci-netbsd-postgres
@ -304,31 +269,18 @@ task:
LC_ALL: "C"
# -Duuid is not set for the NetBSD, see the comment below, above
# configure_script, for more information.
MESON_FEATURES: >-
-Dgssapi=enabled
-Dlibcurl=enabled
-Dnls=enabled
-Dpam=enabled
setup_additional_packages_script: |
#pkgin -y install ...
<<: *netbsd_task_template
- name: OpenBSD - Meson
# See REPO_CI_AUTOMATIC_TRIGGER_TASKS in .cirrus.star
trigger_type: $CI_TRIGGER_TYPE_OPENBSD
only_if: $CI_OPENBSD_ENABLED
only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\nci-os-only:[^\n]*openbsd.*'
env:
OS_NAME: openbsd
IMAGE_FAMILY: pg-ci-openbsd-postgres
PKGCONFIG_PATH: '/usr/lib/pkgconfig:/usr/local/lib/pkgconfig'
MESON_FEATURES: >-
-Dbsd_auth=enabled
-Dlibcurl=enabled
-Dtcl_version=tcl86
-Duuid=e2fs
UUID: -Duuid=e2fs
TCL: -Dtcl_version=tcl86
setup_additional_packages_script: |
#pkg_add -I ...
# Always core dump to ${CORE_DUMP_DIR}
@ -362,10 +314,11 @@ task:
configure_script: |
su postgres <<-EOF
meson setup \
${MESON_COMMON_PG_CONFIG_ARGS} \
--buildtype=debugoptimized \
--pkg-config-path ${PKGCONFIG_PATH} \
${MESON_COMMON_FEATURES} ${MESON_FEATURES} \
-Dcassert=true -Dinjection_points=true \
-Dssl=openssl ${UUID} ${TCL} \
-DPG_TEST_EXTRA="$PG_TEST_EXTRA" \
build
EOF
@ -412,6 +365,10 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-
--with-uuid=ossp
--with-zstd
LINUX_MESON_FEATURES: &LINUX_MESON_FEATURES >-
-Dllvm=enabled
-Duuid=e2fs
# Check SPECIAL in the matrix: below
task:
@ -452,13 +409,12 @@ task:
LLVM_CONFIG: llvm-config-16
LINUX_CONFIGURE_FEATURES: *LINUX_CONFIGURE_FEATURES
LINUX_MESON_FEATURES: >-
-Duuid=e2fs
LINUX_MESON_FEATURES: *LINUX_MESON_FEATURES
<<: *linux_task_template
depends_on: SanityCheck
only_if: $CI_LINUX_ENABLED
only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\nci-os-only:[^\n]*linux.*'
ccache_cache:
folder: ${CCACHE_DIR}
@ -539,7 +495,6 @@ task:
# are typically printed in the server log
# - Test both 64bit and 32 bit builds
# - uses io_method=io_uring
# - Uses meson feature autodetection
- name: Linux - Debian Bookworm - Meson
env:
@ -551,9 +506,9 @@ task:
configure_script: |
su postgres <<-EOF
meson setup \
${MESON_COMMON_PG_CONFIG_ARGS} \
--buildtype=debug \
${LINUX_MESON_FEATURES} -Dllvm=enabled \
-Dcassert=true -Dinjection_points=true \
${LINUX_MESON_FEATURES} \
build
EOF
@ -563,11 +518,13 @@ task:
su postgres <<-EOF
export CC='ccache gcc -m32'
meson setup \
${MESON_COMMON_PG_CONFIG_ARGS} \
--buildtype=debug \
-Dcassert=true -Dinjection_points=true \
${LINUX_MESON_FEATURES} \
-Dllvm=disabled \
--pkg-config-path /usr/lib/i386-linux-gnu/pkgconfig/ \
-DPERL=perl5.36-i386-linux-gnu \
${LINUX_MESON_FEATURES} -Dlibnuma=disabled \
-Dlibnuma=disabled \
build-32
EOF
@ -631,14 +588,6 @@ task:
CCACHE_DIR: ${HOME}/ccache
MACPORTS_CACHE: ${HOME}/macports-cache
MESON_FEATURES: >-
-Dbonjour=enabled
-Ddtrace=enabled
-Dgssapi=enabled
-Dlibcurl=enabled
-Dnls=enabled
-Duuid=e2fs
MACOS_PACKAGE_LIST: >-
ccache
icu
@ -664,7 +613,7 @@ task:
<<: *macos_task_template
depends_on: SanityCheck
only_if: $CI_MACOS_ENABLED
only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\nci-os-only:[^\n]*(macos|darwin|osx).*'
sysinfo_script: |
id
@ -708,11 +657,11 @@ task:
configure_script: |
export PKG_CONFIG_PATH="/opt/local/lib/pkgconfig/"
meson setup \
${MESON_COMMON_PG_CONFIG_ARGS} \
--buildtype=debug \
-Dextra_include_dirs=/opt/local/include \
-Dextra_lib_dirs=/opt/local/lib \
${MESON_COMMON_FEATURES} ${MESON_FEATURES} \
-Dcassert=true -Dinjection_points=true \
-Duuid=e2fs -Ddtrace=auto \
build
build_script: ninja -C build -j${BUILD_JOBS} ${MBUILD_TARGET}
@ -767,18 +716,10 @@ task:
# 0x8001 is SEM_FAILCRITICALERRORS | SEM_NOOPENFILEERRORBOX
CIRRUS_WINDOWS_ERROR_MODE: 0x8001
MESON_FEATURES:
-Dauto_features=disabled
-Dldap=enabled
-Dssl=openssl
-Dtap_tests=enabled
-Dplperl=enabled
-Dplpython=enabled
<<: *windows_task_template
depends_on: SanityCheck
only_if: $CI_WINDOWS_ENABLED
only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\nci-os-only:[^\n]*windows.*'
setup_additional_packages_script: |
REM choco install -y --no-progress ...
@ -789,9 +730,10 @@ task:
echo 127.0.0.3 pg-loadbalancetest >> c:\Windows\System32\Drivers\etc\hosts
type c:\Windows\System32\Drivers\etc\hosts
# Use /DEBUG:FASTLINK to avoid high memory usage during linking
configure_script: |
vcvarsall x64
meson setup --backend ninja %MESON_COMMON_PG_CONFIG_ARGS% --buildtype debug -Db_pch=true -Dextra_lib_dirs=c:\openssl\1.1\lib -Dextra_include_dirs=c:\openssl\1.1\include -DTAR=%TAR% %MESON_FEATURES% build
meson setup --backend ninja --buildtype debug -Dc_link_args=/DEBUG:FASTLINK -Dcassert=true -Dinjection_points=true -Db_pch=true -Dextra_lib_dirs=c:\openssl\1.1\lib -Dextra_include_dirs=c:\openssl\1.1\include -DTAR=%TAR% build
build_script: |
vcvarsall x64
@ -813,11 +755,13 @@ task:
<< : *WINDOWS_ENVIRONMENT_BASE
name: Windows - Server 2019, MinGW64 - Meson
# See REPO_CI_AUTOMATIC_TRIGGER_TASKS in .cirrus.star.
trigger_type: $CI_TRIGGER_TYPE_MINGW
# due to resource constraints we don't run this task by default for now
trigger_type: manual
# worth using only_if despite being manual, otherwise this task will show up
# when e.g. ci-os-only: linux is used.
only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\nci-os-only:[^\n]*mingw.*'
# otherwise it'll be sorted before other tasks
depends_on: SanityCheck
only_if: $CI_MINGW_ENABLED
env:
TEST_JOBS: 4 # higher concurrency causes occasional failures
@ -833,11 +777,6 @@ task:
CHERE_INVOKING: 1
BASH: C:\msys64\usr\bin\bash.exe -l
# Keep -Dnls explicitly disabled, as the number of files it creates causes a
# noticeable slowdown.
MESON_FEATURES: >-
-Dnls=disabled
<<: *windows_task_template
ccache_cache:
@ -852,8 +791,9 @@ task:
%BASH% -c "where perl"
%BASH% -c "perl --version"
# disable -Dnls as the number of files it creates cause a noticable slowdown
configure_script: |
%BASH% -c "meson setup %MESON_COMMON_PG_CONFIG_ARGS% -Ddebug=true -Doptimization=g -Db_pch=true %MESON_COMMON_FEATURES% %MESON_FEATURES% -DTAR=%TAR% build"
%BASH% -c "meson setup -Ddebug=true -Doptimization=g -Dcassert=true -Dinjection_points=true -Db_pch=true -Dnls=disabled -DTAR=%TAR% build"
build_script: |
%BASH% -c "ninja -C build ${MBUILD_TARGET}"
@ -875,9 +815,10 @@ task:
# To limit unnecessary work only run this once the SanityCheck
# succeeds. This is particularly important for this task as we intentionally
# use always: to continue after failures.
# use always: to continue after failures. Task that did not run count as a
# success, so we need to recheck SanityChecks's condition here ...
depends_on: SanityCheck
only_if: $CI_COMPILERWARNINGS_ENABLED
only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\nci-os-only:.*'
env:
CPUS: 4
@ -890,6 +831,7 @@ task:
CCACHE_DIR: "/tmp/ccache_dir"
LINUX_CONFIGURE_FEATURES: *LINUX_CONFIGURE_FEATURES
LINUX_MESON_FEATURES: *LINUX_MESON_FEATURES
# GCC emits a warning for llvm-14, so switch to a newer one.
LLVM_CONFIG: llvm-config-16

View File

@ -10,20 +10,12 @@
#
# 1) the contents of this file
#
# 2) computed environment variables
#
# Used to enable/disable tasks based on the execution environment. See
# .cirrus.star: compute_environment_vars()
#
# 3) if defined, the contents of the file referenced by the, repository
# 2) if defined, the contents of the file referenced by the, repository
# level, REPO_CI_CONFIG_GIT_URL variable (see
# https://cirrus-ci.org/guide/programming-tasks/#fs for the accepted
# format)
#
# This allows running tasks in a different execution environment than the
# default, e.g. to have sufficient resources for cfbot.
#
# 4) .cirrus.tasks.yml
# 3) .cirrus.tasks.yml
#
# This composition is done by .cirrus.star

View File

@ -14,16 +14,7 @@
#
# $ git log --pretty=format:"%H # %cd%n# %s" $PGINDENTGITHASH -1 --date=iso
7e9c216b5236cc61f677787b35e8c8f28f5f6959 # 2025-09-13 14:50:02 -0500
# Re-pgindent nbtpreprocesskeys.c after commit 796962922e.
1d1612aec7688139e1a5506df1366b4b6a69605d # 2025-07-29 09:10:41 -0400
# Run pgindent.
73873805fb3627cb23937c750fa83ffd8f16fc6c # 2025-07-25 16:36:44 -0400
# Run pgindent on the changes of the previous patch.
9e345415bcd3c4358350b89edfd710469b8bfaf9 # 2025-07-01 15:23:07 +0200
07448b3969d55a2081cdafafc23f68df3392f220 # 2025-07-01 15:24:19 +0200
# Fix indentation in pg_numa code
b27644bade0348d0dafd3036c47880a349fe9332 # 2025-06-15 13:04:24 -0400

4
.gitattributes vendored
View File

@ -12,8 +12,8 @@
*.xsl whitespace=space-before-tab,trailing-space,tab-in-indent
# Avoid confusing ASCII underlines with leftover merge conflict markers
README conflict-marker-size=48
README.* conflict-marker-size=48
README conflict-marker-size=32
README.* conflict-marker-size=32
# Certain data files that contain special whitespace, and other special cases
*.data -whitespace

View File

@ -20,7 +20,7 @@ all:
all check install installdirs installcheck installcheck-parallel uninstall clean distclean maintainer-clean dist distcheck world check-world install-world installcheck-world:
@if [ ! -f GNUmakefile ] ; then \
echo "You need to run the 'configure' program first. Please see"; \
echo "<https://www.postgresql.org/docs/devel/installation.html>" ; \
echo "<https://www.postgresql.org/docs/18/installation.html>" ; \
false ; \
fi
@IFS=':' ; \

View File

@ -12,9 +12,9 @@ and functions. This distribution also contains C language bindings.
Copyright and license information can be found in the file COPYRIGHT.
General documentation about this version of PostgreSQL can be found at
<https://www.postgresql.org/docs/devel/>. In particular, information
<https://www.postgresql.org/docs/18/>. In particular, information
about building PostgreSQL from the source code can be found at
<https://www.postgresql.org/docs/devel/installation.html>.
<https://www.postgresql.org/docs/18/installation.html>.
The latest version of this software, and related software, may be
obtained at <https://www.postgresql.org/download/>. For more information

View File

@ -83,7 +83,7 @@ if test x"$pgac_cv__128bit_int" = xyes ; then
AC_CACHE_CHECK([for __int128 alignment bug], [pgac_cv__128bit_int_bug],
[AC_RUN_IFELSE([AC_LANG_PROGRAM([
/* This must match the corresponding code in c.h: */
#if defined(__GNUC__)
#if defined(__GNUC__) || defined(__SUNPRO_C)
#define pg_attribute_aligned(a) __attribute__((aligned(a)))
#elif defined(_MSC_VER)
#define pg_attribute_aligned(a) __declspec(align(a))

View File

@ -22,14 +22,18 @@ sourcetree=`cd $1 && pwd`
buildtree=`cd ${2:-'.'} && pwd`
for item in `find "$sourcetree"/config "$sourcetree"/contrib "$sourcetree"/doc "$sourcetree"/src -type d -print`; do
# We must not auto-create the subdirectories holding built documentation.
# If we did, it would interfere with installation of prebuilt docs from
# the source tree, if a VPATH build is done from a distribution tarball.
# See bug #5595.
for item in `find "$sourcetree" -type d \( \( -name CVS -prune \) -o \( -name .git -prune \) -o -print \) | grep -v "$sourcetree/doc/src/sgml/\+"`; do
subdir=`expr "$item" : "$sourcetree\(.*\)"`
if test ! -d "$buildtree/$subdir"; then
mkdir -p "$buildtree/$subdir" || exit 1
fi
done
for item in "$sourcetree"/Makefile `find "$sourcetree"/config "$sourcetree"/contrib "$sourcetree"/doc "$sourcetree"/src -name Makefile -print -o -name GNUmakefile -print`; do
for item in `find "$sourcetree" -name Makefile -print -o -name GNUmakefile -print | grep -v "$sourcetree/doc/src/sgml/images/"`; do
filename=`expr "$item" : "$sourcetree\(.*\)"`
if test ! -f "${item}.in"; then
if cmp "$item" "$buildtree/$filename" >/dev/null 2>&1; then : ; else

275
configure vendored
View File

@ -1,6 +1,6 @@
#! /bin/sh
# Guess values for system-dependent variables and create Makefiles.
# Generated by GNU Autoconf 2.69 for PostgreSQL 19devel.
# Generated by GNU Autoconf 2.69 for PostgreSQL 18beta3.
#
# Report bugs to <pgsql-bugs@lists.postgresql.org>.
#
@ -582,8 +582,8 @@ MAKEFLAGS=
# Identity of this package.
PACKAGE_NAME='PostgreSQL'
PACKAGE_TARNAME='postgresql'
PACKAGE_VERSION='19devel'
PACKAGE_STRING='PostgreSQL 19devel'
PACKAGE_VERSION='18beta3'
PACKAGE_STRING='PostgreSQL 18beta3'
PACKAGE_BUGREPORT='pgsql-bugs@lists.postgresql.org'
PACKAGE_URL='https://www.postgresql.org/'
@ -739,6 +739,7 @@ PKG_CONFIG_LIBDIR
PKG_CONFIG_PATH
PKG_CONFIG
DLSUFFIX
TAS
GCC
CPP
CFLAGS_SL
@ -759,6 +760,7 @@ CLANG
LLVM_CONFIG
AWK
with_llvm
SUN_STUDIO_CC
ac_ct_CXX
CXXFLAGS
CXX
@ -1466,7 +1468,7 @@ if test "$ac_init_help" = "long"; then
# Omit some internal or obsolete options to make the list less imposing.
# This message is too long to be a string in the A/UX 3.1 sh.
cat <<_ACEOF
\`configure' configures PostgreSQL 19devel to adapt to many kinds of systems.
\`configure' configures PostgreSQL 18beta3 to adapt to many kinds of systems.
Usage: $0 [OPTION]... [VAR=VALUE]...
@ -1531,7 +1533,7 @@ fi
if test -n "$ac_init_help"; then
case $ac_init_help in
short | recursive ) echo "Configuration of PostgreSQL 19devel:";;
short | recursive ) echo "Configuration of PostgreSQL 18beta3:";;
esac
cat <<\_ACEOF
@ -1722,7 +1724,7 @@ fi
test -n "$ac_init_help" && exit $ac_status
if $ac_init_version; then
cat <<\_ACEOF
PostgreSQL configure 19devel
PostgreSQL configure 18beta3
generated by GNU Autoconf 2.69
Copyright (C) 2012 Free Software Foundation, Inc.
@ -2475,7 +2477,7 @@ cat >config.log <<_ACEOF
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.
It was created by PostgreSQL $as_me 19devel, which was
It was created by PostgreSQL $as_me 18beta3, which was
generated by GNU Autoconf 2.69. Invocation command line was
$ $0 $@
@ -3057,6 +3059,12 @@ $as_echo "$template" >&6; }
PORTNAME=$template
# Initialize default assumption that we do not need separate assembly code
# for TAS (test-and-set). This can be overridden by the template file
# when it's executed.
need_tas=no
tas_file=dummy.s
# Default, works for most platforms, override in template file if needed
DLSUFFIX=".so"
@ -4467,48 +4475,189 @@ ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
ac_compiler_gnu=$ac_cv_c_compiler_gnu
# Detect option needed for C11
# loosely modeled after code in later Autoconf versions
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $CC option to accept ISO C11" >&5
$as_echo_n "checking for $CC option to accept ISO C11... " >&6; }
if ${pgac_cv_prog_cc_c11+:} false; then :
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $CC option to accept ISO C99" >&5
$as_echo_n "checking for $CC option to accept ISO C99... " >&6; }
if ${ac_cv_prog_cc_c99+:} false; then :
$as_echo_n "(cached) " >&6
else
pgac_cv_prog_cc_c11=no
pgac_save_CC=$CC
for pgac_arg in '' '-std=gnu11' '-std=c11'; do
CC="$pgac_save_CC $pgac_arg"
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
ac_cv_prog_cc_c99=no
ac_save_CC=$CC
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
#if !defined __STDC_VERSION__ || __STDC_VERSION__ < 201112L
# error "Compiler does not advertise C11 conformance"
#include <stdarg.h>
#include <stdbool.h>
#include <stdlib.h>
#include <wchar.h>
#include <stdio.h>
// Check varargs macros. These examples are taken from C99 6.10.3.5.
#define debug(...) fprintf (stderr, __VA_ARGS__)
#define showlist(...) puts (#__VA_ARGS__)
#define report(test,...) ((test) ? puts (#test) : printf (__VA_ARGS__))
static void
test_varargs_macros (void)
{
int x = 1234;
int y = 5678;
debug ("Flag");
debug ("X = %d\n", x);
showlist (The first, second, and third items.);
report (x>y, "x is %d but y is %d", x, y);
}
// Check long long types.
#define BIG64 18446744073709551615ull
#define BIG32 4294967295ul
#define BIG_OK (BIG64 / BIG32 == 4294967297ull && BIG64 % BIG32 == 0)
#if !BIG_OK
your preprocessor is broken;
#endif
#if BIG_OK
#else
your preprocessor is broken;
#endif
static long long int bignum = -9223372036854775807LL;
static unsigned long long int ubignum = BIG64;
struct incomplete_array
{
int datasize;
double data[];
};
struct named_init {
int number;
const wchar_t *name;
double average;
};
typedef const char *ccp;
static inline int
test_restrict (ccp restrict text)
{
// See if C++-style comments work.
// Iterate through items via the restricted pointer.
// Also check for declarations in for loops.
for (unsigned int i = 0; *(text+i) != '\0'; ++i)
continue;
return 0;
}
// Check varargs and va_copy.
static void
test_varargs (const char *format, ...)
{
va_list args;
va_start (args, format);
va_list args_copy;
va_copy (args_copy, args);
const char *str;
int number;
float fnumber;
while (*format)
{
switch (*format++)
{
case 's': // string
str = va_arg (args_copy, const char *);
break;
case 'd': // int
number = va_arg (args_copy, int);
break;
case 'f': // float
fnumber = va_arg (args_copy, double);
break;
default:
break;
}
}
va_end (args_copy);
va_end (args);
}
int
main ()
{
// Check bool.
_Bool success = false;
// Check restrict.
if (test_restrict ("String literal") == 0)
success = true;
char *restrict newvar = "Another string";
// Check varargs.
test_varargs ("s, d' f .", "string", 65, 34.234);
test_varargs_macros ();
// Check flexible array members.
struct incomplete_array *ia =
malloc (sizeof (struct incomplete_array) + (sizeof (double) * 10));
ia->datasize = 10;
for (int i = 0; i < ia->datasize; ++i)
ia->data[i] = i * 1.234;
// Check named initializers.
struct named_init ni = {
.number = 34,
.name = L"Test wide string",
.average = 543.34343,
};
ni.number = 58;
int dynamic_array[ni.number];
dynamic_array[ni.number - 1] = 543;
// work around unused variable warnings
return (!success || bignum == 0LL || ubignum == 0uLL || newvar[0] == 'x'
|| dynamic_array[ni.number - 1] != 543);
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
pgac_cv_prog_cc_c11=$pgac_arg
for ac_arg in '' -std=gnu99 -std=c99 -c99 -AC99 -D_STDC_C99= -qlanglvl=extc99
do
CC="$ac_save_CC $ac_arg"
if ac_fn_c_try_compile "$LINENO"; then :
ac_cv_prog_cc_c99=$ac_arg
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
test x"$pgac_cv_prog_cc_c11" != x"no" && break
rm -f core conftest.err conftest.$ac_objext
test "x$ac_cv_prog_cc_c99" != "xno" && break
done
CC=$pgac_save_CC
rm -f conftest.$ac_ext
CC=$ac_save_CC
fi
# AC_CACHE_VAL
case "x$ac_cv_prog_cc_c99" in
x)
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: none needed" >&5
$as_echo "none needed" >&6; } ;;
xno)
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5
$as_echo "unsupported" >&6; } ;;
*)
CC="$CC $ac_cv_prog_cc_c99"
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c99" >&5
$as_echo "$ac_cv_prog_cc_c99" >&6; } ;;
esac
if test "x$ac_cv_prog_cc_c99" != xno; then :
fi
if test x"$pgac_cv_prog_cc_c11" = x"no"; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5
$as_echo "unsupported" >&6; }
as_fn_error $? "C compiler \"$CC\" does not support C11" "$LINENO" 5
elif test x"$pgac_cv_prog_cc_c11" = x""; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: none needed" >&5
$as_echo "none needed" >&6; }
else
{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $pgac_cv_prog_cc_c11" >&5
$as_echo "$pgac_cv_prog_cc_c11" >&6; }
CC="$CC $pgac_cv_prog_cc_c11"
fi
# Error out if the compiler does not support C99, as the codebase
# relies on that.
if test "$ac_cv_prog_cc_c99" = no; then
as_fn_error $? "C compiler \"$CC\" does not support C99" "$LINENO" 5
fi
ac_ext=cpp
ac_cpp='$CXXCPP $CPPFLAGS'
@ -4771,6 +4920,7 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu
# Check if it's Intel's compiler, which (usually) pretends to be gcc,
# but has idiosyncrasies of its own. We assume icc will define
# __INTEL_COMPILER regardless of CFLAGS.
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
@ -4791,6 +4941,30 @@ else
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
# Check if it's Sun Studio compiler. We assume that
# __SUNPRO_C will be defined for Sun Studio compilers
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
int
main ()
{
#ifndef __SUNPRO_C
choke me
#endif
;
return 0;
}
_ACEOF
if ac_fn_c_try_compile "$LINENO"; then :
SUN_STUDIO_CC=yes
else
SUN_STUDIO_CC=no
fi
rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
#
# LLVM
@ -6716,7 +6890,7 @@ fi
# __attribute__((visibility("hidden"))) is supported, if we encounter a
# compiler that supports one of the supported variants of -fvisibility=hidden
# but uses a different syntax to mark a symbol as exported.
if test "$GCC" = yes; then
if test "$GCC" = yes -o "$SUN_STUDIO_CC" = yes ; then
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${CC} supports -fvisibility=hidden, for CFLAGS_SL_MODULE" >&5
$as_echo_n "checking whether ${CC} supports -fvisibility=hidden, for CFLAGS_SL_MODULE... " >&6; }
if ${pgac_cv_prog_CC_cflags__fvisibility_hidden+:} false; then :
@ -7699,6 +7873,20 @@ ac_compiler_gnu=$ac_cv_c_compiler_gnu
#
# Set up TAS assembly code if needed; the template file has now had its
# chance to request this.
#
ac_config_links="$ac_config_links src/backend/port/tas.s:src/backend/port/tas/${tas_file}"
if test "$need_tas" = yes ; then
TAS=tas.o
else
TAS=""
fi
cat >>confdefs.h <<_ACEOF
#define DLSUFFIX "$DLSUFFIX"
@ -17095,7 +17283,7 @@ else
/* end confdefs.h. */
/* This must match the corresponding code in c.h: */
#if defined(__GNUC__)
#if defined(__GNUC__) || defined(__SUNPRO_C)
#define pg_attribute_aligned(a) __attribute__((aligned(a)))
#elif defined(_MSC_VER)
#define pg_attribute_aligned(a) __declspec(align(a))
@ -19298,6 +19486,8 @@ fi
if test x"$GCC" = x"yes" ; then
cc_string=`${CC} --version | sed q`
case $cc_string in [A-Za-z]*) ;; *) cc_string="GCC $cc_string";; esac
elif test x"$SUN_STUDIO_CC" = x"yes" ; then
cc_string=`${CC} -V 2>&1 | sed q`
else
cc_string=$CC
fi
@ -19899,7 +20089,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
# report actual input values of CONFIG_FILES etc. instead of their
# values after options handling.
ac_log="
This file was extended by PostgreSQL $as_me 19devel, which was
This file was extended by PostgreSQL $as_me 18beta3, which was
generated by GNU Autoconf 2.69. Invocation command line was
CONFIG_FILES = $CONFIG_FILES
@ -19970,7 +20160,7 @@ _ACEOF
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1
ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`"
ac_cs_version="\\
PostgreSQL config.status 19devel
PostgreSQL config.status 18beta3
configured by $0, generated by GNU Autoconf 2.69,
with options \\"\$ac_cs_config\\"
@ -20094,6 +20284,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1
for ac_config_target in $ac_config_targets
do
case $ac_config_target in
"src/backend/port/tas.s") CONFIG_LINKS="$CONFIG_LINKS src/backend/port/tas.s:src/backend/port/tas/${tas_file}" ;;
"GNUmakefile") CONFIG_FILES="$CONFIG_FILES GNUmakefile" ;;
"src/Makefile.global") CONFIG_FILES="$CONFIG_FILES src/Makefile.global" ;;
"src/backend/port/pg_sema.c") CONFIG_LINKS="$CONFIG_LINKS src/backend/port/pg_sema.c:${SEMA_IMPLEMENTATION}" ;;

View File

@ -17,7 +17,7 @@ dnl Read the Autoconf manual for details.
dnl
m4_pattern_forbid(^PGAC_)dnl to catch undefined macros
AC_INIT([PostgreSQL], [19devel], [pgsql-bugs@lists.postgresql.org], [], [https://www.postgresql.org/])
AC_INIT([PostgreSQL], [18beta3], [pgsql-bugs@lists.postgresql.org], [], [https://www.postgresql.org/])
m4_if(m4_defn([m4_PACKAGE_VERSION]), [2.69], [], [m4_fatal([Autoconf version 2.69 is required.
Untested combinations of 'autoconf' and PostgreSQL versions are not
@ -95,6 +95,12 @@ AC_MSG_RESULT([$template])
PORTNAME=$template
AC_SUBST(PORTNAME)
# Initialize default assumption that we do not need separate assembly code
# for TAS (test-and-set). This can be overridden by the template file
# when it's executed.
need_tas=no
tas_file=dummy.s
# Default, works for most platforms, override in template file if needed
DLSUFFIX=".so"
@ -358,33 +364,14 @@ pgac_cc_list="gcc cc"
pgac_cxx_list="g++ c++"
AC_PROG_CC([$pgac_cc_list])
AC_PROG_CC_C99()
# Detect option needed for C11
# loosely modeled after code in later Autoconf versions
AC_MSG_CHECKING([for $CC option to accept ISO C11])
AC_CACHE_VAL([pgac_cv_prog_cc_c11],
[pgac_cv_prog_cc_c11=no
pgac_save_CC=$CC
for pgac_arg in '' '-std=gnu11' '-std=c11'; do
CC="$pgac_save_CC $pgac_arg"
AC_COMPILE_IFELSE([AC_LANG_SOURCE([[#if !defined __STDC_VERSION__ || __STDC_VERSION__ < 201112L
# error "Compiler does not advertise C11 conformance"
#endif]])], [[pgac_cv_prog_cc_c11=$pgac_arg]])
test x"$pgac_cv_prog_cc_c11" != x"no" && break
done
CC=$pgac_save_CC])
if test x"$pgac_cv_prog_cc_c11" = x"no"; then
AC_MSG_RESULT([unsupported])
AC_MSG_ERROR([C compiler "$CC" does not support C11])
elif test x"$pgac_cv_prog_cc_c11" = x""; then
AC_MSG_RESULT([none needed])
else
AC_MSG_RESULT([$pgac_cv_prog_cc_c11])
CC="$CC $pgac_cv_prog_cc_c11"
# Error out if the compiler does not support C99, as the codebase
# relies on that.
if test "$ac_cv_prog_cc_c99" = no; then
AC_MSG_ERROR([C compiler "$CC" does not support C99])
fi
AC_PROG_CXX([$pgac_cxx_list])
# Check if it's Intel's compiler, which (usually) pretends to be gcc,
@ -394,6 +381,14 @@ AC_COMPILE_IFELSE([AC_LANG_PROGRAM([], [@%:@ifndef __INTEL_COMPILER
choke me
@%:@endif])], [ICC=yes], [ICC=no])
# Check if it's Sun Studio compiler. We assume that
# __SUNPRO_C will be defined for Sun Studio compilers
AC_COMPILE_IFELSE([AC_LANG_PROGRAM([], [@%:@ifndef __SUNPRO_C
choke me
@%:@endif])], [SUN_STUDIO_CC=yes], [SUN_STUDIO_CC=no])
AC_SUBST(SUN_STUDIO_CC)
#
# LLVM
@ -604,7 +599,7 @@ fi
# __attribute__((visibility("hidden"))) is supported, if we encounter a
# compiler that supports one of the supported variants of -fvisibility=hidden
# but uses a different syntax to mark a symbol as exported.
if test "$GCC" = yes; then
if test "$GCC" = yes -o "$SUN_STUDIO_CC" = yes ; then
PGAC_PROG_CC_VAR_OPT(CFLAGS_SL_MODULE, [-fvisibility=hidden])
# For C++ we additionally want -fvisibility-inlines-hidden
PGAC_PROG_VARCXX_VARFLAGS_OPT(CXX, CXXFLAGS_SL_MODULE, [-fvisibility=hidden])
@ -760,6 +755,19 @@ AC_PROG_CPP
AC_SUBST(GCC)
#
# Set up TAS assembly code if needed; the template file has now had its
# chance to request this.
#
AC_CONFIG_LINKS([src/backend/port/tas.s:src/backend/port/tas/${tas_file}])
if test "$need_tas" = yes ; then
TAS=tas.o
else
TAS=""
fi
AC_SUBST(TAS)
AC_SUBST(DLSUFFIX)dnl
AC_DEFINE_UNQUOTED([DLSUFFIX], ["$DLSUFFIX"],
[Define to the file name extension of dynamically-loadable modules.])
@ -2451,6 +2459,8 @@ AC_SUBST(LDFLAGS_EX_BE)
if test x"$GCC" = x"yes" ; then
cc_string=`${CC} --version | sed q`
case $cc_string in [[A-Za-z]]*) ;; *) cc_string="GCC $cc_string";; esac
elif test x"$SUN_STUDIO_CC" = x"yes" ; then
cc_string=`${CC} -V 2>&1 | sed q`
else
cc_string=$CC
fi

View File

@ -60,14 +60,6 @@ SELECT bt_index_parent_check('bttest_a_brin_idx');
ERROR: expected "btree" index as targets for verification
DETAIL: Relation "bttest_a_brin_idx" is a brin index.
ROLLBACK;
-- verify partitioned indexes are rejected (error)
BEGIN;
CREATE TABLE bttest_partitioned (a int, b int) PARTITION BY list (a);
CREATE INDEX bttest_btree_partitioned_idx ON bttest_partitioned USING btree (b);
SELECT bt_index_parent_check('bttest_btree_partitioned_idx');
ERROR: expected index as targets for verification
DETAIL: This operation is not supported for partitioned indexes.
ROLLBACK;
-- normal check outside of xact
SELECT bt_index_check('bttest_a_idx');
bt_index_check

View File

@ -52,13 +52,6 @@ CREATE INDEX bttest_a_brin_idx ON bttest_a USING brin(id);
SELECT bt_index_parent_check('bttest_a_brin_idx');
ROLLBACK;
-- verify partitioned indexes are rejected (error)
BEGIN;
CREATE TABLE bttest_partitioned (a int, b int) PARTITION BY list (a);
CREATE INDEX bttest_btree_partitioned_idx ON bttest_partitioned USING btree (b);
SELECT bt_index_parent_check('bttest_btree_partitioned_idx');
ROLLBACK;
-- normal check outside of xact
SELECT bt_index_check('bttest_a_idx');
-- more expansive tests

View File

@ -18,13 +18,11 @@
#include "verify_common.h"
#include "catalog/index.h"
#include "catalog/pg_am.h"
#include "commands/defrem.h"
#include "commands/tablecmds.h"
#include "utils/guc.h"
#include "utils/syscache.h"
static bool amcheck_index_mainfork_expected(Relation rel);
static bool index_checkable(Relation rel, Oid am_id);
/*
@ -157,21 +155,23 @@ amcheck_lock_relation_and_check(Oid indrelid,
* callable by non-superusers. If granted, it's useful to be able to check a
* whole cluster.
*/
static bool
bool
index_checkable(Relation rel, Oid am_id)
{
if (rel->rd_rel->relkind != RELKIND_INDEX)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("expected index as targets for verification"),
errdetail_relkind_not_supported(rel->rd_rel->relkind)));
if (rel->rd_rel->relkind != RELKIND_INDEX ||
rel->rd_rel->relam != am_id)
{
HeapTuple amtup;
HeapTuple amtuprel;
if (rel->rd_rel->relam != am_id)
amtup = SearchSysCache1(AMOID, ObjectIdGetDatum(am_id));
amtuprel = SearchSysCache1(AMOID, ObjectIdGetDatum(rel->rd_rel->relam));
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("expected \"%s\" index as targets for verification", get_am_name(am_id)),
errmsg("expected \"%s\" index as targets for verification", NameStr(((Form_pg_am) GETSTRUCT(amtup))->amname)),
errdetail("Relation \"%s\" is a %s index.",
RelationGetRelationName(rel), get_am_name(rel->rd_rel->relam))));
RelationGetRelationName(rel), NameStr(((Form_pg_am) GETSTRUCT(amtuprel))->amname))));
}
if (RELATION_IS_OTHER_TEMP(rel))
ereport(ERROR,
@ -182,7 +182,7 @@ index_checkable(Relation rel, Oid am_id)
if (!rel->rd_index->indisvalid)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("cannot check index \"%s\"",
RelationGetRelationName(rel)),
errdetail("Index is not valid.")));

View File

@ -1,12 +1,12 @@
/*-------------------------------------------------------------------------
*
* verify_common.h
* amcheck.h
* Shared routines for amcheck verifications.
*
* Copyright (c) 2016-2025, PostgreSQL Global Development Group
*
* IDENTIFICATION
* contrib/amcheck/verify_common.h
* contrib/amcheck/amcheck.h
*
*-------------------------------------------------------------------------
*/
@ -16,7 +16,8 @@
#include "utils/relcache.h"
#include "miscadmin.h"
/* Typedef for callback function for amcheck_lock_relation_and_check */
/* Typedefs for callback functions for amcheck_lock_relation_and_check */
typedef void (*IndexCheckableCallback) (Relation index);
typedef void (*IndexDoCheckCallback) (Relation rel,
Relation heaprel,
void *state,
@ -26,3 +27,5 @@ extern void amcheck_lock_relation_and_check(Oid indrelid,
Oid am_id,
IndexDoCheckCallback check,
LOCKMODE lockmode, void *state);
extern bool index_checkable(Relation rel, Oid am_id);

View File

@ -174,7 +174,7 @@ gin_check_posting_tree_parent_keys_consistency(Relation rel, BlockNumber posting
buffer = ReadBufferExtended(rel, MAIN_FORKNUM, stack->blkno,
RBM_NORMAL, strategy);
LockBuffer(buffer, GIN_SHARE);
page = BufferGetPage(buffer);
page = (Page) BufferGetPage(buffer);
Assert(GinPageIsData(page));
@ -434,7 +434,7 @@ gin_check_parent_keys_consistency(Relation rel,
buffer = ReadBufferExtended(rel, MAIN_FORKNUM, stack->blkno,
RBM_NORMAL, strategy);
LockBuffer(buffer, GIN_SHARE);
page = BufferGetPage(buffer);
page = (Page) BufferGetPage(buffer);
maxoff = PageGetMaxOffsetNumber(page);
rightlink = GinPageGetOpaque(page)->rightlink;

View File

@ -1942,7 +1942,7 @@ check_tuple(HeapCheckContext *ctx, bool *xmin_commit_status_ok,
if (RelationGetDescr(ctx->rel)->natts < ctx->natts)
{
report_corruption(ctx,
psprintf("number of attributes %u exceeds maximum %u expected for table",
psprintf("number of attributes %u exceeds maximum expected for table %u",
ctx->natts,
RelationGetDescr(ctx->rel)->natts));
return;

View File

@ -913,7 +913,7 @@ bt_report_duplicate(BtreeCheckState *state,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("index uniqueness is violated for index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail("Index %s%s and%s%s (point to heap %s and %s) page lsn=%X/%08X.",
errdetail("Index %s%s and%s%s (point to heap %s and %s) page lsn=%X/%X.",
itid, pposting, nitid, pnposting, htid, nhtid,
LSN_FORMAT_ARGS(state->targetlsn))));
}
@ -1058,7 +1058,7 @@ bt_leftmost_ignoring_half_dead(BtreeCheckState *state,
(errcode(ERRCODE_NO_DATA),
errmsg_internal("harmless interrupted page deletion detected in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Block=%u right block=%u page lsn=%X/%08X.",
errdetail_internal("Block=%u right block=%u page lsn=%X/%X.",
reached, reached_from,
LSN_FORMAT_ARGS(pagelsn))));
@ -1283,7 +1283,7 @@ bt_target_page_check(BtreeCheckState *state)
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("wrong number of high key index tuple attributes in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Index block=%u natts=%u block type=%s page lsn=%X/%08X.",
errdetail_internal("Index block=%u natts=%u block type=%s page lsn=%X/%X.",
state->targetblock,
BTreeTupleGetNAtts(itup, state->rel),
P_ISLEAF(topaque) ? "heap" : "index",
@ -1332,7 +1332,7 @@ bt_target_page_check(BtreeCheckState *state)
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("index tuple size does not equal lp_len in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Index tid=(%u,%u) tuple size=%zu lp_len=%u page lsn=%X/%08X.",
errdetail_internal("Index tid=(%u,%u) tuple size=%zu lp_len=%u page lsn=%X/%X.",
state->targetblock, offset,
tupsize, ItemIdGetLength(itemid),
LSN_FORMAT_ARGS(state->targetlsn)),
@ -1356,7 +1356,7 @@ bt_target_page_check(BtreeCheckState *state)
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("wrong number of index tuple attributes in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Index tid=%s natts=%u points to %s tid=%s page lsn=%X/%08X.",
errdetail_internal("Index tid=%s natts=%u points to %s tid=%s page lsn=%X/%X.",
itid,
BTreeTupleGetNAtts(itup, state->rel),
P_ISLEAF(topaque) ? "heap" : "index",
@ -1406,7 +1406,7 @@ bt_target_page_check(BtreeCheckState *state)
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("could not find tuple using search from root page in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Index tid=%s points to heap tid=%s page lsn=%X/%08X.",
errdetail_internal("Index tid=%s points to heap tid=%s page lsn=%X/%X.",
itid, htid,
LSN_FORMAT_ARGS(state->targetlsn))));
}
@ -1435,7 +1435,7 @@ bt_target_page_check(BtreeCheckState *state)
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg_internal("posting list contains misplaced TID in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Index tid=%s posting list offset=%d page lsn=%X/%08X.",
errdetail_internal("Index tid=%s posting list offset=%d page lsn=%X/%X.",
itid, i,
LSN_FORMAT_ARGS(state->targetlsn))));
}
@ -1488,7 +1488,7 @@ bt_target_page_check(BtreeCheckState *state)
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("index row size %zu exceeds maximum for index \"%s\"",
tupsize, RelationGetRelationName(state->rel)),
errdetail_internal("Index tid=%s points to %s tid=%s page lsn=%X/%08X.",
errdetail_internal("Index tid=%s points to %s tid=%s page lsn=%X/%X.",
itid,
P_ISLEAF(topaque) ? "heap" : "index",
htid,
@ -1595,7 +1595,7 @@ bt_target_page_check(BtreeCheckState *state)
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("high key invariant violated for index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Index tid=%s points to %s tid=%s page lsn=%X/%08X.",
errdetail_internal("Index tid=%s points to %s tid=%s page lsn=%X/%X.",
itid,
P_ISLEAF(topaque) ? "heap" : "index",
htid,
@ -1641,7 +1641,9 @@ bt_target_page_check(BtreeCheckState *state)
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("item order invariant violated for index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Lower index tid=%s (points to %s tid=%s) higher index tid=%s (points to %s tid=%s) page lsn=%X/%08X.",
errdetail_internal("Lower index tid=%s (points to %s tid=%s) "
"higher index tid=%s (points to %s tid=%s) "
"page lsn=%X/%X.",
itid,
P_ISLEAF(topaque) ? "heap" : "index",
htid,
@ -1758,7 +1760,7 @@ bt_target_page_check(BtreeCheckState *state)
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("cross page item order invariant violated for index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Last item on page tid=(%u,%u) page lsn=%X/%08X.",
errdetail_internal("Last item on page tid=(%u,%u) page lsn=%X/%X.",
state->targetblock, offset,
LSN_FORMAT_ARGS(state->targetlsn))));
}
@ -1811,7 +1813,7 @@ bt_target_page_check(BtreeCheckState *state)
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("right block of leaf block is non-leaf for index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Block=%u page lsn=%X/%08X.",
errdetail_internal("Block=%u page lsn=%X/%X.",
state->targetblock,
LSN_FORMAT_ARGS(state->targetlsn))));
@ -2235,7 +2237,7 @@ bt_child_highkey_check(BtreeCheckState *state,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("the first child of leftmost target page is not leftmost of its level in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Target block=%u child block=%u target page lsn=%X/%08X.",
errdetail_internal("Target block=%u child block=%u target page lsn=%X/%X.",
state->targetblock, blkno,
LSN_FORMAT_ARGS(state->targetlsn))));
@ -2321,7 +2323,7 @@ bt_child_highkey_check(BtreeCheckState *state,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("child high key is greater than rightmost pivot key on target level in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Target block=%u child block=%u target page lsn=%X/%08X.",
errdetail_internal("Target block=%u child block=%u target page lsn=%X/%X.",
state->targetblock, blkno,
LSN_FORMAT_ARGS(state->targetlsn))));
pivotkey_offset = P_HIKEY;
@@ -2351,7 +2353,7 @@ bt_child_highkey_check(BtreeCheckState *state,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("can't find left sibling high key in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Target block=%u child block=%u target page lsn=%X/%08X.",
errdetail_internal("Target block=%u child block=%u target page lsn=%X/%X.",
state->targetblock, blkno,
LSN_FORMAT_ARGS(state->targetlsn))));
itup = state->lowkey;
@@ -2363,7 +2365,7 @@ bt_child_highkey_check(BtreeCheckState *state,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("mismatch between parent key and child high key in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Target block=%u child block=%u target page lsn=%X/%08X.",
errdetail_internal("Target block=%u child block=%u target page lsn=%X/%X.",
state->targetblock, blkno,
LSN_FORMAT_ARGS(state->targetlsn))));
}
@@ -2503,7 +2505,7 @@ bt_child_check(BtreeCheckState *state, BTScanInsert targetkey,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("downlink to deleted page found in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Parent block=%u child block=%u parent page lsn=%X/%08X.",
errdetail_internal("Parent block=%u child block=%u parent page lsn=%X/%X.",
state->targetblock, childblock,
LSN_FORMAT_ARGS(state->targetlsn))));
@@ -2544,7 +2546,7 @@ bt_child_check(BtreeCheckState *state, BTScanInsert targetkey,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("down-link lower bound invariant violated for index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Parent block=%u child index tid=(%u,%u) parent page lsn=%X/%08X.",
errdetail_internal("Parent block=%u child index tid=(%u,%u) parent page lsn=%X/%X.",
state->targetblock, childblock, offset,
LSN_FORMAT_ARGS(state->targetlsn))));
}
@@ -2614,7 +2616,7 @@ bt_downlink_missing_check(BtreeCheckState *state, bool rightsplit,
(errcode(ERRCODE_NO_DATA),
errmsg_internal("harmless interrupted page split detected in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Block=%u level=%u left sibling=%u page lsn=%X/%08X.",
errdetail_internal("Block=%u level=%u left sibling=%u page lsn=%X/%X.",
blkno, opaque->btpo_level,
opaque->btpo_prev,
LSN_FORMAT_ARGS(pagelsn))));
@@ -2636,7 +2638,7 @@ bt_downlink_missing_check(BtreeCheckState *state, bool rightsplit,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("leaf index block lacks downlink in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Block=%u page lsn=%X/%08X.",
errdetail_internal("Block=%u page lsn=%X/%X.",
blkno,
LSN_FORMAT_ARGS(pagelsn))));
@@ -2702,7 +2704,7 @@ bt_downlink_missing_check(BtreeCheckState *state, bool rightsplit,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg_internal("downlink to deleted leaf page found in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Top parent/target block=%u leaf block=%u top parent/under check lsn=%X/%08X.",
errdetail_internal("Top parent/target block=%u leaf block=%u top parent/under check lsn=%X/%X.",
blkno, childblk,
LSN_FORMAT_ARGS(pagelsn))));
@@ -2728,7 +2730,7 @@ bt_downlink_missing_check(BtreeCheckState *state, bool rightsplit,
(errcode(ERRCODE_INDEX_CORRUPTED),
errmsg("internal index block lacks downlink in index \"%s\"",
RelationGetRelationName(state->rel)),
errdetail_internal("Block=%u level=%u page lsn=%X/%08X.",
errdetail_internal("Block=%u level=%u page lsn=%X/%X.",
blkno, opaque->btpo_level,
LSN_FORMAT_ARGS(pagelsn))));
}
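Every hunk in this file makes the same mechanical swap: the page LSN is printed zero-padded as %X/%08X on one side and unpadded as %X/%X on the other. A minimal standalone sketch of the difference, with a simplified stand-in for PostgreSQL's LSN_FORMAT_ARGS macro from access/xlogdefs.h (the real macro also type-checks its argument):

#include <stdio.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

/* Simplified stand-in for PostgreSQL's LSN_FORMAT_ARGS() */
#define LSN_FORMAT_ARGS(lsn) ((uint32_t) ((lsn) >> 32)), ((uint32_t) (lsn))

int
main(void)
{
    XLogRecPtr lsn = ((XLogRecPtr) 0x1 << 32) | 0x2A;

    printf("%X/%08X\n", LSN_FORMAT_ARGS(lsn));  /* prints 1/0000002A */
    printf("%X/%X\n", LSN_FORMAT_ARGS(lsn));    /* prints 1/2A */
    return 0;
}

The padded form keeps the low half fixed-width, so LSNs printed in error details line up and compare visually.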


@@ -24,7 +24,7 @@ tests += {
'tests': [
't/001_basic.pl',
],
'env': {'GZIP_PROGRAM': gzip.found() ? gzip.full_path() : '',
'TAR': tar.found() ? tar.full_path() : '' },
'env': {'GZIP_PROGRAM': gzip.found() ? gzip.path() : '',
'TAR': tar.found() ? tar.path() : '' },
},
}
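For context (an assumption drawn from Meson's documentation, not stated in the hunk itself): ExternalProgram.path() is the legacy accessor that Meson deprecated in favor of full_path(), so the two sides are functionally equivalent. A minimal sketch:

gzip = find_program('gzip', required: false)
# full_path() is the current spelling; path() is the deprecated alias
gzip_bin = gzip.found() ? gzip.full_path() : ''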


@@ -65,7 +65,7 @@ void
_PG_init(void)
{
DefineCustomStringVariable("basic_archive.archive_directory",
"Archive file destination directory.",
gettext_noop("Archive file destination directory."),
NULL,
&archive_directory,
"",


@@ -192,7 +192,7 @@ blvacuumcleanup(IndexVacuumInfo *info, IndexBulkDeleteResult *stats)
buffer = ReadBufferExtended(index, MAIN_FORKNUM, blkno,
RBM_NORMAL, info->strategy);
LockBuffer(buffer, BUFFER_LOCK_SHARE);
page = BufferGetPage(buffer);
page = (Page) BufferGetPage(buffer);
if (PageIsNew(page) || BloomPageIsDeleted(page))
{


@@ -7,7 +7,7 @@ OBJS = \
EXTENSION = btree_gin
DATA = btree_gin--1.0.sql btree_gin--1.0--1.1.sql btree_gin--1.1--1.2.sql \
btree_gin--1.2--1.3.sql btree_gin--1.3--1.4.sql
btree_gin--1.2--1.3.sql
PGFILEDESC = "btree_gin - B-tree equivalent GIN operator classes"
REGRESS = install_btree_gin int2 int4 int8 float4 float8 money oid \


@@ -1,151 +0,0 @@
/* contrib/btree_gin/btree_gin--1.3--1.4.sql */
-- complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "ALTER EXTENSION btree_gin UPDATE TO '1.4'" to load this file. \quit
--
-- Cross-type operator support is new in 1.4. We only need to worry
-- about this for cross-type operators that exist in core.
--
-- Because the opclass extractQuery and consistent methods don't directly
-- get any information about the datatype of the RHS value, we have to
-- encode that in the operator strategy numbers. The strategy numbers
-- are the operator's normal btree strategy (1-5) plus 16 times a code
-- for the RHS datatype.
--
ALTER OPERATOR FAMILY int2_ops USING gin
ADD
-- Code 1: RHS is int4
OPERATOR 0x11 < (int2, int4),
OPERATOR 0x12 <= (int2, int4),
OPERATOR 0x13 = (int2, int4),
OPERATOR 0x14 >= (int2, int4),
OPERATOR 0x15 > (int2, int4),
-- Code 2: RHS is int8
OPERATOR 0x21 < (int2, int8),
OPERATOR 0x22 <= (int2, int8),
OPERATOR 0x23 = (int2, int8),
OPERATOR 0x24 >= (int2, int8),
OPERATOR 0x25 > (int2, int8)
;
ALTER OPERATOR FAMILY int4_ops USING gin
ADD
-- Code 1: RHS is int2
OPERATOR 0x11 < (int4, int2),
OPERATOR 0x12 <= (int4, int2),
OPERATOR 0x13 = (int4, int2),
OPERATOR 0x14 >= (int4, int2),
OPERATOR 0x15 > (int4, int2),
-- Code 2: RHS is int8
OPERATOR 0x21 < (int4, int8),
OPERATOR 0x22 <= (int4, int8),
OPERATOR 0x23 = (int4, int8),
OPERATOR 0x24 >= (int4, int8),
OPERATOR 0x25 > (int4, int8)
;
ALTER OPERATOR FAMILY int8_ops USING gin
ADD
-- Code 1: RHS is int2
OPERATOR 0x11 < (int8, int2),
OPERATOR 0x12 <= (int8, int2),
OPERATOR 0x13 = (int8, int2),
OPERATOR 0x14 >= (int8, int2),
OPERATOR 0x15 > (int8, int2),
-- Code 2: RHS is int4
OPERATOR 0x21 < (int8, int4),
OPERATOR 0x22 <= (int8, int4),
OPERATOR 0x23 = (int8, int4),
OPERATOR 0x24 >= (int8, int4),
OPERATOR 0x25 > (int8, int4)
;
ALTER OPERATOR FAMILY float4_ops USING gin
ADD
-- Code 1: RHS is float8
OPERATOR 0x11 < (float4, float8),
OPERATOR 0x12 <= (float4, float8),
OPERATOR 0x13 = (float4, float8),
OPERATOR 0x14 >= (float4, float8),
OPERATOR 0x15 > (float4, float8)
;
ALTER OPERATOR FAMILY float8_ops USING gin
ADD
-- Code 1: RHS is float4
OPERATOR 0x11 < (float8, float4),
OPERATOR 0x12 <= (float8, float4),
OPERATOR 0x13 = (float8, float4),
OPERATOR 0x14 >= (float8, float4),
OPERATOR 0x15 > (float8, float4)
;
ALTER OPERATOR FAMILY text_ops USING gin
ADD
-- Code 1: RHS is name
OPERATOR 0x11 < (text, name),
OPERATOR 0x12 <= (text, name),
OPERATOR 0x13 = (text, name),
OPERATOR 0x14 >= (text, name),
OPERATOR 0x15 > (text, name)
;
ALTER OPERATOR FAMILY name_ops USING gin
ADD
-- Code 1: RHS is text
OPERATOR 0x11 < (name, text),
OPERATOR 0x12 <= (name, text),
OPERATOR 0x13 = (name, text),
OPERATOR 0x14 >= (name, text),
OPERATOR 0x15 > (name, text)
;
ALTER OPERATOR FAMILY date_ops USING gin
ADD
-- Code 1: RHS is timestamp
OPERATOR 0x11 < (date, timestamp),
OPERATOR 0x12 <= (date, timestamp),
OPERATOR 0x13 = (date, timestamp),
OPERATOR 0x14 >= (date, timestamp),
OPERATOR 0x15 > (date, timestamp),
-- Code 2: RHS is timestamptz
OPERATOR 0x21 < (date, timestamptz),
OPERATOR 0x22 <= (date, timestamptz),
OPERATOR 0x23 = (date, timestamptz),
OPERATOR 0x24 >= (date, timestamptz),
OPERATOR 0x25 > (date, timestamptz)
;
ALTER OPERATOR FAMILY timestamp_ops USING gin
ADD
-- Code 1: RHS is date
OPERATOR 0x11 < (timestamp, date),
OPERATOR 0x12 <= (timestamp, date),
OPERATOR 0x13 = (timestamp, date),
OPERATOR 0x14 >= (timestamp, date),
OPERATOR 0x15 > (timestamp, date),
-- Code 2: RHS is timestamptz
OPERATOR 0x21 < (timestamp, timestamptz),
OPERATOR 0x22 <= (timestamp, timestamptz),
OPERATOR 0x23 = (timestamp, timestamptz),
OPERATOR 0x24 >= (timestamp, timestamptz),
OPERATOR 0x25 > (timestamp, timestamptz)
;
ALTER OPERATOR FAMILY timestamptz_ops USING gin
ADD
-- Code 1: RHS is date
OPERATOR 0x11 < (timestamptz, date),
OPERATOR 0x12 <= (timestamptz, date),
OPERATOR 0x13 = (timestamptz, date),
OPERATOR 0x14 >= (timestamptz, date),
OPERATOR 0x15 > (timestamptz, date),
-- Code 2: RHS is timestamp
OPERATOR 0x21 < (timestamptz, timestamp),
OPERATOR 0x22 <= (timestamptz, timestamp),
OPERATOR 0x23 = (timestamptz, timestamp),
OPERATOR 0x24 >= (timestamptz, timestamp),
OPERATOR 0x25 > (timestamptz, timestamp)
;
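To make the strategy-number encoding concrete, here is one entry from above decoded by hand (a sketch; the numbers follow directly from the rule stated in the comment at the top of this script):

-- OPERATOR 0x25 > (int2, int8)
--   0x25 = 37 = 16 * 2 + 5
--   high bits: RHS type code 2   -> right-hand side is int8 (in the int2 opclass)
--   low 4 bits: btree strategy 5 -> BTGreaterStrategyNumber, i.e. ">"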


@@ -6,7 +6,6 @@
#include <limits.h>
#include "access/stratnum.h"
#include "mb/pg_wchar.h"
#include "utils/builtins.h"
#include "utils/date.h"
#include "utils/float.h"
@@ -14,36 +13,20 @@
#include "utils/numeric.h"
#include "utils/timestamp.h"
#include "utils/uuid.h"
#include "varatt.h"
PG_MODULE_MAGIC_EXT(
.name = "btree_gin",
.version = PG_VERSION
);
/*
* Our opclasses use the same strategy numbers as btree (1-5) for same-type
* comparison operators. For cross-type comparison operators, the
* low 4 bits of our strategy numbers are the btree strategy number,
* and the upper bits are a code for the right-hand-side data type.
*/
#define BTGIN_GET_BTREE_STRATEGY(strat) ((strat) & 0x0F)
#define BTGIN_GET_RHS_TYPE_CODE(strat) ((strat) >> 4)
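A hedged sketch of the decode direction, using the two macros just defined; the values correspond to the 0x11..0x25 operator entries in the 1.3--1.4 extension script:

StrategyNumber strat = 0x25;    /* "int2 > int8" in the upgrade script */

int btree_strat = BTGIN_GET_BTREE_STRATEGY(strat);  /* 5 = BTGreaterStrategyNumber */
int rhs_code = BTGIN_GET_RHS_TYPE_CODE(strat);      /* 2 = second cross-type RHS (int8) */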
/* extra data passed from gin_btree_extract_query to gin_btree_compare_prefix */
typedef struct QueryInfo
{
StrategyNumber strategy; /* operator strategy number */
Datum orig_datum; /* original query (comparison) datum */
Datum entry_datum; /* datum we reported as the entry value */
PGFunction typecmp; /* appropriate btree comparison function */
StrategyNumber strategy;
Datum datum;
bool is_varlena;
Datum (*typecmp) (FunctionCallInfo);
} QueryInfo;
typedef Datum (*btree_gin_convert_function) (Datum input);
typedef Datum (*btree_gin_leftmost_function) (void);
/*** GIN support functions shared by all datatypes ***/
static Datum
@@ -53,7 +36,6 @@ gin_btree_extract_value(FunctionCallInfo fcinfo, bool is_varlena)
int32 *nentries = (int32 *) PG_GETARG_POINTER(1);
Datum *entries = (Datum *) palloc(sizeof(Datum));
/* Ensure that values stored in the index are not toasted */
if (is_varlena)
datum = PointerGetDatum(PG_DETOAST_DATUM(datum));
entries[0] = datum;
@@ -62,12 +44,19 @@ gin_btree_extract_value(FunctionCallInfo fcinfo, bool is_varlena)
PG_RETURN_POINTER(entries);
}
/*
* For BTGreaterEqualStrategyNumber, BTGreaterStrategyNumber, and
* BTEqualStrategyNumber we want to start the index scan at the
* supplied query datum, and work forward. For BTLessStrategyNumber
* and BTLessEqualStrategyNumber, we need to start at the leftmost
* key, and work forward until the supplied query datum (which must be
* sent along inside the QueryInfo structure).
*/
static Datum
gin_btree_extract_query(FunctionCallInfo fcinfo,
btree_gin_leftmost_function leftmostvalue,
const bool *rhs_is_varlena,
const btree_gin_convert_function *cvt_fns,
const PGFunction *cmp_fns)
bool is_varlena,
Datum (*leftmostvalue) (void),
Datum (*typecmp) (FunctionCallInfo))
{
Datum datum = PG_GETARG_DATUM(0);
int32 *nentries = (int32 *) PG_GETARG_POINTER(1);
@@ -76,40 +65,21 @@ gin_btree_extract_query(FunctionCallInfo fcinfo,
Pointer **extra_data = (Pointer **) PG_GETARG_POINTER(4);
Datum *entries = (Datum *) palloc(sizeof(Datum));
QueryInfo *data = (QueryInfo *) palloc(sizeof(QueryInfo));
bool *ptr_partialmatch = (bool *) palloc(sizeof(bool));
int btree_strat,
rhs_code;
bool *ptr_partialmatch;
/*
* Extract the btree strategy code and the RHS data type code from the
* given strategy number.
*/
btree_strat = BTGIN_GET_BTREE_STRATEGY(strategy);
rhs_code = BTGIN_GET_RHS_TYPE_CODE(strategy);
/*
* Detoast the comparison datum. This isn't necessary for correctness,
* but it can save repeat detoastings within the comparison function.
*/
if (rhs_is_varlena[rhs_code])
datum = PointerGetDatum(PG_DETOAST_DATUM(datum));
/* Prep single comparison key with possible partial-match flag */
*nentries = 1;
*partialmatch = ptr_partialmatch;
ptr_partialmatch = *partialmatch = (bool *) palloc(sizeof(bool));
*ptr_partialmatch = false;
if (is_varlena)
datum = PointerGetDatum(PG_DETOAST_DATUM(datum));
data->strategy = strategy;
data->datum = datum;
data->is_varlena = is_varlena;
data->typecmp = typecmp;
*extra_data = (Pointer *) palloc(sizeof(Pointer));
**extra_data = (Pointer) data;
/*
* For BTGreaterEqualStrategyNumber, BTGreaterStrategyNumber, and
* BTEqualStrategyNumber we want to start the index scan at the supplied
* query datum, and work forward. For BTLessStrategyNumber and
* BTLessEqualStrategyNumber, we need to start at the leftmost key, and
* work forward until the supplied query datum (which we'll send along
* inside the QueryInfo structure). Use partial match rules except for
* BTEqualStrategyNumber without a conversion function. (If there is a
* conversion function, comparison to the entry value is not trustworthy.)
*/
switch (btree_strat)
switch (strategy)
{
case BTLessStrategyNumber:
case BTLessEqualStrategyNumber:
@@ -121,106 +91,75 @@ gin_btree_extract_query(FunctionCallInfo fcinfo,
*ptr_partialmatch = true;
/* FALLTHROUGH */
case BTEqualStrategyNumber:
/* If we have a conversion function, apply it */
if (cvt_fns && cvt_fns[rhs_code])
{
entries[0] = (*cvt_fns[rhs_code]) (datum);
*ptr_partialmatch = true;
}
else
entries[0] = datum;
entries[0] = datum;
break;
default:
elog(ERROR, "unrecognized strategy number: %d", strategy);
}
/* Fill "extra" data */
data->strategy = strategy;
data->orig_datum = datum;
data->entry_datum = entries[0];
data->typecmp = cmp_fns[rhs_code];
*extra_data = (Pointer *) palloc(sizeof(Pointer));
**extra_data = (Pointer) data;
PG_RETURN_POINTER(entries);
}
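The res values assigned in gin_btree_compare_prefix below follow GIN's documented comparePartial contract; summarizing it here, since the switch statement depends on it:

/*
 * comparePartial return convention (per the GIN support-function docs):
 *   < 0  entry does not match, but the scan should continue
 *   = 0  entry matches
 *   > 0  entry does not match and no later entry can; stop the scan
 */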
/*
* Datum a is a value from extract_query method and for BTLess*
* strategy it is a left-most value. So, use original datum from QueryInfo
* to decide to stop scanning or not. Datum b is always from index.
*/
static Datum
gin_btree_compare_prefix(FunctionCallInfo fcinfo)
{
Datum partial_key PG_USED_FOR_ASSERTS_ONLY = PG_GETARG_DATUM(0);
Datum key = PG_GETARG_DATUM(1);
Datum a = PG_GETARG_DATUM(0);
Datum b = PG_GETARG_DATUM(1);
QueryInfo *data = (QueryInfo *) PG_GETARG_POINTER(3);
int32 res,
cmp;
/*
* partial_key is only an approximation to the real comparison value,
* especially if it's a leftmost value. We can get an accurate answer by
* doing a possibly-cross-type comparison to the real comparison value.
* (Note that partial_key and key are of the indexed datatype while
* orig_datum is of the query operator's RHS datatype.)
*
* But just to be sure that things are what we expect, let's assert that
* partial_key is indeed what gin_btree_extract_query reported, so that
* we'll notice if anyone ever changes the core code in a way that breaks
* our assumptions.
*/
Assert(partial_key == data->entry_datum);
cmp = DatumGetInt32(CallerFInfoFunctionCall2(data->typecmp,
fcinfo->flinfo,
PG_GET_COLLATION(),
data->orig_datum,
key));
(data->strategy == BTLessStrategyNumber ||
data->strategy == BTLessEqualStrategyNumber)
? data->datum : a,
b));
/*
* Convert the comparison result to the correct thing for the search
* operator strategy. When dealing with cross-type comparisons, an
* imprecise entry datum could lead GIN to start the scan just before the
* first possible match, so we must continue the scan if the current index
* entry doesn't satisfy the search condition for >= and > cases. But if
* that happens in an = search we can stop, because an imprecise entry
* datum means that the search value is unrepresentable in the indexed
* data type, so that there will be no exact matches.
*/
switch (BTGIN_GET_BTREE_STRATEGY(data->strategy))
switch (data->strategy)
{
case BTLessStrategyNumber:
/* If original datum > indexed one then return match */
if (cmp > 0)
res = 0;
else
res = 1; /* end scan */
res = 1;
break;
case BTLessEqualStrategyNumber:
/* If original datum >= indexed one then return match */
/* The same except equality */
if (cmp >= 0)
res = 0;
else
res = 1; /* end scan */
res = 1;
break;
case BTEqualStrategyNumber:
/* If original datum = indexed one then return match */
/* See above about why we can end scan when cmp < 0 */
if (cmp == 0)
res = 0;
if (cmp != 0)
res = 1;
else
res = 1; /* end scan */
res = 0;
break;
case BTGreaterEqualStrategyNumber:
/* If original datum <= indexed one then return match */
if (cmp <= 0)
res = 0;
else
res = -1; /* keep scanning */
res = 1;
break;
case BTGreaterStrategyNumber:
/* If original datum < indexed one then return match */
/* If original datum <= indexed one then return match */
/* If original datum == indexed one then continue scan */
if (cmp < 0)
res = 0;
else if (cmp == 0)
res = -1;
else
res = -1; /* keep scanning */
res = 1;
break;
default:
elog(ERROR, "unrecognized strategy number: %d",
@@ -243,20 +182,19 @@ gin_btree_consistent(PG_FUNCTION_ARGS)
/*** GIN_SUPPORT macro defines the datatype specific functions ***/
#define GIN_SUPPORT(type, leftmostvalue, is_varlena, cvtfns, cmpfns) \
#define GIN_SUPPORT(type, is_varlena, leftmostvalue, typecmp) \
PG_FUNCTION_INFO_V1(gin_extract_value_##type); \
Datum \
gin_extract_value_##type(PG_FUNCTION_ARGS) \
{ \
return gin_btree_extract_value(fcinfo, is_varlena[0]); \
return gin_btree_extract_value(fcinfo, is_varlena); \
} \
PG_FUNCTION_INFO_V1(gin_extract_query_##type); \
Datum \
gin_extract_query_##type(PG_FUNCTION_ARGS) \
{ \
return gin_btree_extract_query(fcinfo, \
leftmostvalue, is_varlena, \
cvtfns, cmpfns); \
is_varlena, leftmostvalue, typecmp); \
} \
PG_FUNCTION_INFO_V1(gin_compare_prefix_##type); \
Datum \
@@ -268,66 +206,13 @@ gin_compare_prefix_##type(PG_FUNCTION_ARGS) \
/*** Datatype specifications ***/
/* Function to produce the least possible value of the indexed datatype */
static Datum
leftmostvalue_int2(void)
{
return Int16GetDatum(SHRT_MIN);
}
/*
* For cross-type support, we must provide conversion functions that produce
* a Datum of the indexed datatype, since GIN requires the "entry" datums to
* be of that type. If an exact conversion is not possible, produce a value
* that will lead GIN to find the first index entry that is greater than
* or equal to the actual comparison value. (But rounding down is OK, so
* sometimes we might find an index entry that's just less than the
* comparison value.)
*
* For integer values, it's sufficient to clamp the input to be in-range.
*
* Note: for out-of-range input values, we could in theory detect that the
* search condition matches all or none of the index, and avoid a useless
* index descent in the latter case. Such searches are probably rare though,
* so we don't contort this code enough to do that.
*/
static Datum
cvt_int4_int2(Datum input)
{
int32 val = DatumGetInt32(input);
val = Max(val, SHRT_MIN);
val = Min(val, SHRT_MAX);
return Int16GetDatum((int16) val);
}
static Datum
cvt_int8_int2(Datum input)
{
int64 val = DatumGetInt64(input);
val = Max(val, SHRT_MIN);
val = Min(val, SHRT_MAX);
return Int16GetDatum((int16) val);
}
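A hedged illustration of the clamp, using the probe values that the int2 regression cases further down exercise:

/* cvt_int4_int2(Int32GetDatum(32768))  => Int16GetDatum(32767)   (clamped to SHRT_MAX) */
/* cvt_int4_int2(Int32GetDatum(-32769)) => Int16GetDatum(-32768)  (clamped to SHRT_MIN) */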
/*
* RHS-type-is-varlena flags, conversion and comparison function arrays,
* indexed by high bits of the operator strategy number. A NULL in the
* conversion function array indicates that no conversion is needed, which
* will always be the case for the zero'th entry. Note that the cross-type
* comparison functions should be the ones with the indexed datatype second.
*/
static const bool int2_rhs_is_varlena[] =
{false, false, false};
static const btree_gin_convert_function int2_cvt_fns[] =
{NULL, cvt_int4_int2, cvt_int8_int2};
static const PGFunction int2_cmp_fns[] =
{btint2cmp, btint42cmp, btint82cmp};
GIN_SUPPORT(int2, leftmostvalue_int2, int2_rhs_is_varlena, int2_cvt_fns, int2_cmp_fns)
GIN_SUPPORT(int2, false, leftmostvalue_int2, btint2cmp)
static Datum
leftmostvalue_int4(void)
@@ -335,34 +220,7 @@ leftmostvalue_int4(void)
return Int32GetDatum(INT_MIN);
}
static Datum
cvt_int2_int4(Datum input)
{
int16 val = DatumGetInt16(input);
return Int32GetDatum((int32) val);
}
static Datum
cvt_int8_int4(Datum input)
{
int64 val = DatumGetInt64(input);
val = Max(val, INT_MIN);
val = Min(val, INT_MAX);
return Int32GetDatum((int32) val);
}
static const bool int4_rhs_is_varlena[] =
{false, false, false};
static const btree_gin_convert_function int4_cvt_fns[] =
{NULL, cvt_int2_int4, cvt_int8_int4};
static const PGFunction int4_cmp_fns[] =
{btint4cmp, btint24cmp, btint84cmp};
GIN_SUPPORT(int4, leftmostvalue_int4, int4_rhs_is_varlena, int4_cvt_fns, int4_cmp_fns)
GIN_SUPPORT(int4, false, leftmostvalue_int4, btint4cmp)
static Datum
leftmostvalue_int8(void)
@@ -370,32 +228,7 @@ leftmostvalue_int8(void)
return Int64GetDatum(PG_INT64_MIN);
}
static Datum
cvt_int2_int8(Datum input)
{
int16 val = DatumGetInt16(input);
return Int64GetDatum((int64) val);
}
static Datum
cvt_int4_int8(Datum input)
{
int32 val = DatumGetInt32(input);
return Int64GetDatum((int64) val);
}
static const bool int8_rhs_is_varlena[] =
{false, false, false};
static const btree_gin_convert_function int8_cvt_fns[] =
{NULL, cvt_int2_int8, cvt_int4_int8};
static const PGFunction int8_cmp_fns[] =
{btint8cmp, btint28cmp, btint48cmp};
GIN_SUPPORT(int8, leftmostvalue_int8, int8_rhs_is_varlena, int8_cvt_fns, int8_cmp_fns)
GIN_SUPPORT(int8, false, leftmostvalue_int8, btint8cmp)
static Datum
leftmostvalue_float4(void)
@@ -403,34 +236,7 @@ leftmostvalue_float4(void)
return Float4GetDatum(-get_float4_infinity());
}
static Datum
cvt_float8_float4(Datum input)
{
float8 val = DatumGetFloat8(input);
float4 result;
/*
* Assume that ordinary C conversion will produce a usable result.
* (Compare dtof(), which raises error conditions that we don't need.)
* Note that for inputs that aren't exactly representable as float4, it
* doesn't matter whether the conversion rounds up or down. That might
* cause us to scan a few index entries that we'll reject as not matching,
* but we won't miss any that should match.
*/
result = (float4) val;
return Float4GetDatum(result);
}
static const bool float4_rhs_is_varlena[] =
{false, false};
static const btree_gin_convert_function float4_cvt_fns[] =
{NULL, cvt_float8_float4};
static const PGFunction float4_cmp_fns[] =
{btfloat4cmp, btfloat84cmp};
GIN_SUPPORT(float4, leftmostvalue_float4, float4_rhs_is_varlena, float4_cvt_fns, float4_cmp_fns)
GIN_SUPPORT(float4, false, leftmostvalue_float4, btfloat4cmp)
static Datum
leftmostvalue_float8(void)
@@ -438,24 +244,7 @@ leftmostvalue_float8(void)
return Float8GetDatum(-get_float8_infinity());
}
static Datum
cvt_float4_float8(Datum input)
{
float4 val = DatumGetFloat4(input);
return Float8GetDatum((float8) val);
}
static const bool float8_rhs_is_varlena[] =
{false, false};
static const btree_gin_convert_function float8_cvt_fns[] =
{NULL, cvt_float4_float8};
static const PGFunction float8_cmp_fns[] =
{btfloat8cmp, btfloat48cmp};
GIN_SUPPORT(float8, leftmostvalue_float8, float8_rhs_is_varlena, float8_cvt_fns, float8_cmp_fns)
GIN_SUPPORT(float8, false, leftmostvalue_float8, btfloat8cmp)
static Datum
leftmostvalue_money(void)
@@ -463,13 +252,7 @@ leftmostvalue_money(void)
return Int64GetDatum(PG_INT64_MIN);
}
static const bool money_rhs_is_varlena[] =
{false};
static const PGFunction money_cmp_fns[] =
{cash_cmp};
GIN_SUPPORT(money, leftmostvalue_money, money_rhs_is_varlena, NULL, money_cmp_fns)
GIN_SUPPORT(money, false, leftmostvalue_money, cash_cmp)
static Datum
leftmostvalue_oid(void)
@@ -477,13 +260,7 @@ leftmostvalue_oid(void)
return ObjectIdGetDatum(0);
}
static const bool oid_rhs_is_varlena[] =
{false};
static const PGFunction oid_cmp_fns[] =
{btoidcmp};
GIN_SUPPORT(oid, leftmostvalue_oid, oid_rhs_is_varlena, NULL, oid_cmp_fns)
GIN_SUPPORT(oid, false, leftmostvalue_oid, btoidcmp)
static Datum
leftmostvalue_timestamp(void)
@@ -491,75 +268,9 @@ leftmostvalue_timestamp(void)
return TimestampGetDatum(DT_NOBEGIN);
}
static Datum
cvt_date_timestamp(Datum input)
{
DateADT val = DatumGetDateADT(input);
Timestamp result;
int overflow;
GIN_SUPPORT(timestamp, false, leftmostvalue_timestamp, timestamp_cmp)
result = date2timestamp_opt_overflow(val, &overflow);
/* We can ignore the overflow result, since result is useful as-is */
return TimestampGetDatum(result);
}
static Datum
cvt_timestamptz_timestamp(Datum input)
{
TimestampTz val = DatumGetTimestampTz(input);
Timestamp result;
int overflow;
result = timestamptz2timestamp_opt_overflow(val, &overflow);
/* We can ignore the overflow result, since result is useful as-is */
return TimestampGetDatum(result);
}
static const bool timestamp_rhs_is_varlena[] =
{false, false, false};
static const btree_gin_convert_function timestamp_cvt_fns[] =
{NULL, cvt_date_timestamp, cvt_timestamptz_timestamp};
static const PGFunction timestamp_cmp_fns[] =
{timestamp_cmp, date_cmp_timestamp, timestamptz_cmp_timestamp};
GIN_SUPPORT(timestamp, leftmostvalue_timestamp, timestamp_rhs_is_varlena, timestamp_cvt_fns, timestamp_cmp_fns)
static Datum
cvt_date_timestamptz(Datum input)
{
DateADT val = DatumGetDateADT(input);
TimestampTz result;
int overflow;
result = date2timestamptz_opt_overflow(val, &overflow);
/* We can ignore the overflow result, since result is useful as-is */
return TimestampTzGetDatum(result);
}
static Datum
cvt_timestamp_timestamptz(Datum input)
{
Timestamp val = DatumGetTimestamp(input);
TimestampTz result;
int overflow;
result = timestamp2timestamptz_opt_overflow(val, &overflow);
/* We can ignore the overflow result, since result is useful as-is */
return TimestampTzGetDatum(result);
}
static const bool timestamptz_rhs_is_varlena[] =
{false, false, false};
static const btree_gin_convert_function timestamptz_cvt_fns[] =
{NULL, cvt_date_timestamptz, cvt_timestamp_timestamptz};
static const PGFunction timestamptz_cmp_fns[] =
{timestamp_cmp, date_cmp_timestamptz, timestamp_cmp_timestamptz};
GIN_SUPPORT(timestamptz, leftmostvalue_timestamp, timestamptz_rhs_is_varlena, timestamptz_cvt_fns, timestamptz_cmp_fns)
GIN_SUPPORT(timestamptz, false, leftmostvalue_timestamp, timestamp_cmp)
static Datum
leftmostvalue_time(void)
@@ -567,13 +278,7 @@ leftmostvalue_time(void)
return TimeADTGetDatum(0);
}
static const bool time_rhs_is_varlena[] =
{false};
static const PGFunction time_cmp_fns[] =
{time_cmp};
GIN_SUPPORT(time, leftmostvalue_time, time_rhs_is_varlena, NULL, time_cmp_fns)
GIN_SUPPORT(time, false, leftmostvalue_time, time_cmp)
static Datum
leftmostvalue_timetz(void)
@@ -586,13 +291,7 @@ leftmostvalue_timetz(void)
return TimeTzADTPGetDatum(v);
}
static const bool timetz_rhs_is_varlena[] =
{false};
static const PGFunction timetz_cmp_fns[] =
{timetz_cmp};
GIN_SUPPORT(timetz, leftmostvalue_timetz, timetz_rhs_is_varlena, NULL, timetz_cmp_fns)
GIN_SUPPORT(timetz, false, leftmostvalue_timetz, timetz_cmp)
static Datum
leftmostvalue_date(void)
@@ -600,40 +299,7 @@ leftmostvalue_date(void)
return DateADTGetDatum(DATEVAL_NOBEGIN);
}
static Datum
cvt_timestamp_date(Datum input)
{
Timestamp val = DatumGetTimestamp(input);
DateADT result;
int overflow;
result = timestamp2date_opt_overflow(val, &overflow);
/* We can ignore the overflow result, since result is useful as-is */
return DateADTGetDatum(result);
}
static Datum
cvt_timestamptz_date(Datum input)
{
TimestampTz val = DatumGetTimestampTz(input);
DateADT result;
int overflow;
result = timestamptz2date_opt_overflow(val, &overflow);
/* We can ignore the overflow result, since result is useful as-is */
return DateADTGetDatum(result);
}
static const bool date_rhs_is_varlena[] =
{false, false, false};
static const btree_gin_convert_function date_cvt_fns[] =
{NULL, cvt_timestamp_date, cvt_timestamptz_date};
static const PGFunction date_cmp_fns[] =
{date_cmp, timestamp_cmp_date, timestamptz_cmp_date};
GIN_SUPPORT(date, leftmostvalue_date, date_rhs_is_varlena, date_cvt_fns, date_cmp_fns)
GIN_SUPPORT(date, false, leftmostvalue_date, date_cmp)
static Datum
leftmostvalue_interval(void)
@@ -645,13 +311,7 @@ leftmostvalue_interval(void)
return IntervalPGetDatum(v);
}
static const bool interval_rhs_is_varlena[] =
{false};
static const PGFunction interval_cmp_fns[] =
{interval_cmp};
GIN_SUPPORT(interval, leftmostvalue_interval, interval_rhs_is_varlena, NULL, interval_cmp_fns)
GIN_SUPPORT(interval, false, leftmostvalue_interval, interval_cmp)
static Datum
leftmostvalue_macaddr(void)
@@ -661,13 +321,7 @@ leftmostvalue_macaddr(void)
return MacaddrPGetDatum(v);
}
static const bool macaddr_rhs_is_varlena[] =
{false};
static const PGFunction macaddr_cmp_fns[] =
{macaddr_cmp};
GIN_SUPPORT(macaddr, leftmostvalue_macaddr, macaddr_rhs_is_varlena, NULL, macaddr_cmp_fns)
GIN_SUPPORT(macaddr, false, leftmostvalue_macaddr, macaddr_cmp)
static Datum
leftmostvalue_macaddr8(void)
@@ -677,13 +331,7 @@ leftmostvalue_macaddr8(void)
return Macaddr8PGetDatum(v);
}
static const bool macaddr8_rhs_is_varlena[] =
{false};
static const PGFunction macaddr8_cmp_fns[] =
{macaddr8_cmp};
GIN_SUPPORT(macaddr8, leftmostvalue_macaddr8, macaddr8_rhs_is_varlena, NULL, macaddr8_cmp_fns)
GIN_SUPPORT(macaddr8, false, leftmostvalue_macaddr8, macaddr8_cmp)
static Datum
leftmostvalue_inet(void)
@@ -691,21 +339,9 @@ leftmostvalue_inet(void)
return DirectFunctionCall1(inet_in, CStringGetDatum("0.0.0.0/0"));
}
static const bool inet_rhs_is_varlena[] =
{true};
GIN_SUPPORT(inet, true, leftmostvalue_inet, network_cmp)
static const PGFunction inet_cmp_fns[] =
{network_cmp};
GIN_SUPPORT(inet, leftmostvalue_inet, inet_rhs_is_varlena, NULL, inet_cmp_fns)
static const bool cidr_rhs_is_varlena[] =
{true};
static const PGFunction cidr_cmp_fns[] =
{network_cmp};
GIN_SUPPORT(cidr, leftmostvalue_inet, cidr_rhs_is_varlena, NULL, cidr_cmp_fns)
GIN_SUPPORT(cidr, true, leftmostvalue_inet, network_cmp)
static Datum
leftmostvalue_text(void)
@@ -713,32 +349,9 @@ leftmostvalue_text(void)
return PointerGetDatum(cstring_to_text_with_len("", 0));
}
static Datum
cvt_name_text(Datum input)
{
Name val = DatumGetName(input);
GIN_SUPPORT(text, true, leftmostvalue_text, bttextcmp)
return PointerGetDatum(cstring_to_text(NameStr(*val)));
}
static const bool text_rhs_is_varlena[] =
{true, false};
static const btree_gin_convert_function text_cvt_fns[] =
{NULL, cvt_name_text};
static const PGFunction text_cmp_fns[] =
{bttextcmp, btnametextcmp};
GIN_SUPPORT(text, leftmostvalue_text, text_rhs_is_varlena, text_cvt_fns, text_cmp_fns)
static const bool bpchar_rhs_is_varlena[] =
{true};
static const PGFunction bpchar_cmp_fns[] =
{bpcharcmp};
GIN_SUPPORT(bpchar, leftmostvalue_text, bpchar_rhs_is_varlena, NULL, bpchar_cmp_fns)
GIN_SUPPORT(bpchar, true, leftmostvalue_text, bpcharcmp)
static Datum
leftmostvalue_char(void)
@@ -746,21 +359,9 @@ leftmostvalue_char(void)
return CharGetDatum(0);
}
static const bool char_rhs_is_varlena[] =
{false};
GIN_SUPPORT(char, false, leftmostvalue_char, btcharcmp)
static const PGFunction char_cmp_fns[] =
{btcharcmp};
GIN_SUPPORT(char, leftmostvalue_char, char_rhs_is_varlena, NULL, char_cmp_fns)
static const bool bytea_rhs_is_varlena[] =
{true};
static const PGFunction bytea_cmp_fns[] =
{byteacmp};
GIN_SUPPORT(bytea, leftmostvalue_text, bytea_rhs_is_varlena, NULL, bytea_cmp_fns)
GIN_SUPPORT(bytea, true, leftmostvalue_text, byteacmp)
static Datum
leftmostvalue_bit(void)
@@ -771,13 +372,7 @@ leftmostvalue_bit(void)
Int32GetDatum(-1));
}
static const bool bit_rhs_is_varlena[] =
{true};
static const PGFunction bit_cmp_fns[] =
{bitcmp};
GIN_SUPPORT(bit, leftmostvalue_bit, bit_rhs_is_varlena, NULL, bit_cmp_fns)
GIN_SUPPORT(bit, true, leftmostvalue_bit, bitcmp)
static Datum
leftmostvalue_varbit(void)
@@ -788,13 +383,7 @@ leftmostvalue_varbit(void)
Int32GetDatum(-1));
}
static const bool varbit_rhs_is_varlena[] =
{true};
static const PGFunction varbit_cmp_fns[] =
{bitcmp};
GIN_SUPPORT(varbit, leftmostvalue_varbit, varbit_rhs_is_varlena, NULL, varbit_cmp_fns)
GIN_SUPPORT(varbit, true, leftmostvalue_varbit, bitcmp)
/*
* Numeric type hasn't a real left-most value, so we use PointerGetDatum(NULL)
@@ -839,13 +428,7 @@ leftmostvalue_numeric(void)
return PointerGetDatum(NULL);
}
static const bool numeric_rhs_is_varlena[] =
{true};
static const PGFunction numeric_cmp_fns[] =
{gin_numeric_cmp};
GIN_SUPPORT(numeric, leftmostvalue_numeric, numeric_rhs_is_varlena, NULL, numeric_cmp_fns)
GIN_SUPPORT(numeric, true, leftmostvalue_numeric, gin_numeric_cmp)
/*
* Use a similar trick to that used for numeric for enums, since we don't
@@ -894,13 +477,7 @@ leftmostvalue_enum(void)
return ObjectIdGetDatum(InvalidOid);
}
static const bool enum_rhs_is_varlena[] =
{false};
static const PGFunction enum_cmp_fns[] =
{gin_enum_cmp};
GIN_SUPPORT(anyenum, leftmostvalue_enum, enum_rhs_is_varlena, NULL, enum_cmp_fns)
GIN_SUPPORT(anyenum, false, leftmostvalue_enum, gin_enum_cmp)
static Datum
leftmostvalue_uuid(void)
@@ -914,13 +491,7 @@ leftmostvalue_uuid(void)
return UUIDPGetDatum(retval);
}
static const bool uuid_rhs_is_varlena[] =
{false};
static const PGFunction uuid_cmp_fns[] =
{uuid_cmp};
GIN_SUPPORT(uuid, leftmostvalue_uuid, uuid_rhs_is_varlena, NULL, uuid_cmp_fns)
GIN_SUPPORT(uuid, false, leftmostvalue_uuid, uuid_cmp)
static Datum
leftmostvalue_name(void)
@@ -930,37 +501,7 @@ leftmostvalue_name(void)
return NameGetDatum(result);
}
static Datum
cvt_text_name(Datum input)
{
text *val = DatumGetTextPP(input);
NameData *result = (NameData *) palloc0(NAMEDATALEN);
int len = VARSIZE_ANY_EXHDR(val);
/*
* Truncate oversize input. We're assuming this will produce a result
* considered less than the original. That could be a bad assumption in
* some collations, but fortunately an index on "name" is generally going
* to use C collation.
*/
if (len >= NAMEDATALEN)
len = pg_mbcliplen(VARDATA_ANY(val), len, NAMEDATALEN - 1);
memcpy(NameStr(*result), VARDATA_ANY(val), len);
return NameGetDatum(result);
}
static const bool name_rhs_is_varlena[] =
{false, true};
static const btree_gin_convert_function name_cvt_fns[] =
{NULL, cvt_text_name};
static const PGFunction name_cmp_fns[] =
{btnamecmp, bttextnamecmp};
GIN_SUPPORT(name, leftmostvalue_name, name_rhs_is_varlena, name_cvt_fns, name_cmp_fns)
GIN_SUPPORT(name, false, leftmostvalue_name, btnamecmp)
static Datum
leftmostvalue_bool(void)
@@ -968,10 +509,4 @@ leftmostvalue_bool(void)
return BoolGetDatum(false);
}
static const bool bool_rhs_is_varlena[] =
{false};
static const PGFunction bool_cmp_fns[] =
{btboolcmp};
GIN_SUPPORT(bool, leftmostvalue_bool, bool_rhs_is_varlena, NULL, bool_cmp_fns)
GIN_SUPPORT(bool, false, leftmostvalue_bool, btboolcmp)


@@ -1,6 +1,6 @@
# btree_gin extension
comment = 'support for indexing common datatypes in GIN'
default_version = '1.4'
default_version = '1.3'
module_pathname = '$libdir/btree_gin'
relocatable = true
trusted = true


@@ -49,365 +49,3 @@ SELECT * FROM test_date WHERE i>'2004-10-26'::date ORDER BY i;
10-28-2004
(2 rows)
explain (costs off)
SELECT * FROM test_date WHERE i<'2004-10-26'::timestamp ORDER BY i;
QUERY PLAN
-----------------------------------------------------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_date
Recheck Cond: (i < 'Tue Oct 26 00:00:00 2004'::timestamp without time zone)
-> Bitmap Index Scan on idx_date
Index Cond: (i < 'Tue Oct 26 00:00:00 2004'::timestamp without time zone)
(6 rows)
SELECT * FROM test_date WHERE i<'2004-10-26'::timestamp ORDER BY i;
i
------------
10-23-2004
10-24-2004
10-25-2004
(3 rows)
SELECT * FROM test_date WHERE i<='2004-10-26'::timestamp ORDER BY i;
i
------------
10-23-2004
10-24-2004
10-25-2004
10-26-2004
(4 rows)
SELECT * FROM test_date WHERE i='2004-10-26'::timestamp ORDER BY i;
i
------------
10-26-2004
(1 row)
SELECT * FROM test_date WHERE i>='2004-10-26'::timestamp ORDER BY i;
i
------------
10-26-2004
10-27-2004
10-28-2004
(3 rows)
SELECT * FROM test_date WHERE i>'2004-10-26'::timestamp ORDER BY i;
i
------------
10-27-2004
10-28-2004
(2 rows)
explain (costs off)
SELECT * FROM test_date WHERE i<'2004-10-26'::timestamptz ORDER BY i;
QUERY PLAN
------------------------------------------------------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_date
Recheck Cond: (i < 'Tue Oct 26 00:00:00 2004 PDT'::timestamp with time zone)
-> Bitmap Index Scan on idx_date
Index Cond: (i < 'Tue Oct 26 00:00:00 2004 PDT'::timestamp with time zone)
(6 rows)
SELECT * FROM test_date WHERE i<'2004-10-26'::timestamptz ORDER BY i;
i
------------
10-23-2004
10-24-2004
10-25-2004
(3 rows)
SELECT * FROM test_date WHERE i<='2004-10-26'::timestamptz ORDER BY i;
i
------------
10-23-2004
10-24-2004
10-25-2004
10-26-2004
(4 rows)
SELECT * FROM test_date WHERE i='2004-10-26'::timestamptz ORDER BY i;
i
------------
10-26-2004
(1 row)
SELECT * FROM test_date WHERE i>='2004-10-26'::timestamptz ORDER BY i;
i
------------
10-26-2004
10-27-2004
10-28-2004
(3 rows)
SELECT * FROM test_date WHERE i>'2004-10-26'::timestamptz ORDER BY i;
i
------------
10-27-2004
10-28-2004
(2 rows)
-- Check endpoint and out-of-range cases
INSERT INTO test_date VALUES ('-infinity'), ('infinity');
SELECT gin_clean_pending_list('idx_date');
gin_clean_pending_list
------------------------
1
(1 row)
SELECT * FROM test_date WHERE i<'-infinity'::timestamp ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_date WHERE i<='-infinity'::timestamp ORDER BY i;
i
-----------
-infinity
(1 row)
SELECT * FROM test_date WHERE i='-infinity'::timestamp ORDER BY i;
i
-----------
-infinity
(1 row)
SELECT * FROM test_date WHERE i>='-infinity'::timestamp ORDER BY i;
i
------------
-infinity
10-23-2004
10-24-2004
10-25-2004
10-26-2004
10-27-2004
10-28-2004
infinity
(8 rows)
SELECT * FROM test_date WHERE i>'-infinity'::timestamp ORDER BY i;
i
------------
10-23-2004
10-24-2004
10-25-2004
10-26-2004
10-27-2004
10-28-2004
infinity
(7 rows)
SELECT * FROM test_date WHERE i<'infinity'::timestamp ORDER BY i;
i
------------
-infinity
10-23-2004
10-24-2004
10-25-2004
10-26-2004
10-27-2004
10-28-2004
(7 rows)
SELECT * FROM test_date WHERE i<='infinity'::timestamp ORDER BY i;
i
------------
-infinity
10-23-2004
10-24-2004
10-25-2004
10-26-2004
10-27-2004
10-28-2004
infinity
(8 rows)
SELECT * FROM test_date WHERE i='infinity'::timestamp ORDER BY i;
i
----------
infinity
(1 row)
SELECT * FROM test_date WHERE i>='infinity'::timestamp ORDER BY i;
i
----------
infinity
(1 row)
SELECT * FROM test_date WHERE i>'infinity'::timestamp ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_date WHERE i<'-infinity'::timestamptz ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_date WHERE i<='-infinity'::timestamptz ORDER BY i;
i
-----------
-infinity
(1 row)
SELECT * FROM test_date WHERE i='-infinity'::timestamptz ORDER BY i;
i
-----------
-infinity
(1 row)
SELECT * FROM test_date WHERE i>='-infinity'::timestamptz ORDER BY i;
i
------------
-infinity
10-23-2004
10-24-2004
10-25-2004
10-26-2004
10-27-2004
10-28-2004
infinity
(8 rows)
SELECT * FROM test_date WHERE i>'-infinity'::timestamptz ORDER BY i;
i
------------
10-23-2004
10-24-2004
10-25-2004
10-26-2004
10-27-2004
10-28-2004
infinity
(7 rows)
SELECT * FROM test_date WHERE i<'infinity'::timestamptz ORDER BY i;
i
------------
-infinity
10-23-2004
10-24-2004
10-25-2004
10-26-2004
10-27-2004
10-28-2004
(7 rows)
SELECT * FROM test_date WHERE i<='infinity'::timestamptz ORDER BY i;
i
------------
-infinity
10-23-2004
10-24-2004
10-25-2004
10-26-2004
10-27-2004
10-28-2004
infinity
(8 rows)
SELECT * FROM test_date WHERE i='infinity'::timestamptz ORDER BY i;
i
----------
infinity
(1 row)
SELECT * FROM test_date WHERE i>='infinity'::timestamptz ORDER BY i;
i
----------
infinity
(1 row)
SELECT * FROM test_date WHERE i>'infinity'::timestamptz ORDER BY i;
i
---
(0 rows)
-- Check rounding cases
-- '2004-10-25 00:00:01' rounds to '2004-10-25' for date.
-- '2004-10-25 23:59:59' also rounds to '2004-10-25',
-- so it's the same case as '2004-10-25 00:00:01'
SELECT * FROM test_date WHERE i < '2004-10-25 00:00:01'::timestamp ORDER BY i;
i
------------
-infinity
10-23-2004
10-24-2004
10-25-2004
(4 rows)
SELECT * FROM test_date WHERE i <= '2004-10-25 00:00:01'::timestamp ORDER BY i;
i
------------
-infinity
10-23-2004
10-24-2004
10-25-2004
(4 rows)
SELECT * FROM test_date WHERE i = '2004-10-25 00:00:01'::timestamp ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_date WHERE i > '2004-10-25 00:00:01'::timestamp ORDER BY i;
i
------------
10-26-2004
10-27-2004
10-28-2004
infinity
(4 rows)
SELECT * FROM test_date WHERE i >= '2004-10-25 00:00:01'::timestamp ORDER BY i;
i
------------
10-26-2004
10-27-2004
10-28-2004
infinity
(4 rows)
SELECT * FROM test_date WHERE i < '2004-10-25 00:00:01'::timestamptz ORDER BY i;
i
------------
-infinity
10-23-2004
10-24-2004
10-25-2004
(4 rows)
SELECT * FROM test_date WHERE i <= '2004-10-25 00:00:01'::timestamptz ORDER BY i;
i
------------
-infinity
10-23-2004
10-24-2004
10-25-2004
(4 rows)
SELECT * FROM test_date WHERE i = '2004-10-25 00:00:01'::timestamptz ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_date WHERE i > '2004-10-25 00:00:01'::timestamptz ORDER BY i;
i
------------
10-26-2004
10-27-2004
10-28-2004
infinity
(4 rows)
SELECT * FROM test_date WHERE i >= '2004-10-25 00:00:01'::timestamptz ORDER BY i;
i
------------
10-26-2004
10-27-2004
10-28-2004
infinity
(4 rows)
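These rounding cases hold because the comparison ultimately runs against the original timestamp probe, not against the date it was converted to for the index entry. A hedged sanity check in plain SQL:

SELECT '2004-10-25'::date < '2004-10-25 00:00:01'::timestamp;  -- true
SELECT '2004-10-25'::date = '2004-10-25 00:00:01'::timestamp;  -- false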


@@ -42,324 +42,3 @@ SELECT * FROM test_float4 WHERE i>1::float4 ORDER BY i;
3
(2 rows)
explain (costs off)
SELECT * FROM test_float4 WHERE i<1::float8 ORDER BY i;
QUERY PLAN
-------------------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_float4
Recheck Cond: (i < '1'::double precision)
-> Bitmap Index Scan on idx_float4
Index Cond: (i < '1'::double precision)
(6 rows)
SELECT * FROM test_float4 WHERE i<1::float8 ORDER BY i;
i
----
-2
-1
0
(3 rows)
SELECT * FROM test_float4 WHERE i<=1::float8 ORDER BY i;
i
----
-2
-1
0
1
(4 rows)
SELECT * FROM test_float4 WHERE i=1::float8 ORDER BY i;
i
---
1
(1 row)
SELECT * FROM test_float4 WHERE i>=1::float8 ORDER BY i;
i
---
1
2
3
(3 rows)
SELECT * FROM test_float4 WHERE i>1::float8 ORDER BY i;
i
---
2
3
(2 rows)
-- Check endpoint and out-of-range cases
INSERT INTO test_float4 VALUES ('NaN'), ('Inf'), ('-Inf');
SELECT gin_clean_pending_list('idx_float4');
gin_clean_pending_list
------------------------
1
(1 row)
SELECT * FROM test_float4 WHERE i<'-Inf'::float8 ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_float4 WHERE i<='-Inf'::float8 ORDER BY i;
i
-----------
-Infinity
(1 row)
SELECT * FROM test_float4 WHERE i='-Inf'::float8 ORDER BY i;
i
-----------
-Infinity
(1 row)
SELECT * FROM test_float4 WHERE i>='-Inf'::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
0
1
2
3
Infinity
NaN
(9 rows)
SELECT * FROM test_float4 WHERE i>'-Inf'::float8 ORDER BY i;
i
----------
-2
-1
0
1
2
3
Infinity
NaN
(8 rows)
SELECT * FROM test_float4 WHERE i<'Inf'::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
0
1
2
3
(7 rows)
SELECT * FROM test_float4 WHERE i<='Inf'::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
0
1
2
3
Infinity
(8 rows)
SELECT * FROM test_float4 WHERE i='Inf'::float8 ORDER BY i;
i
----------
Infinity
(1 row)
SELECT * FROM test_float4 WHERE i>='Inf'::float8 ORDER BY i;
i
----------
Infinity
NaN
(2 rows)
SELECT * FROM test_float4 WHERE i>'Inf'::float8 ORDER BY i;
i
-----
NaN
(1 row)
SELECT * FROM test_float4 WHERE i<'1e300'::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
0
1
2
3
(7 rows)
SELECT * FROM test_float4 WHERE i<='1e300'::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
0
1
2
3
(7 rows)
SELECT * FROM test_float4 WHERE i='1e300'::float8 ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_float4 WHERE i>='1e300'::float8 ORDER BY i;
i
----------
Infinity
NaN
(2 rows)
SELECT * FROM test_float4 WHERE i>'1e300'::float8 ORDER BY i;
i
----------
Infinity
NaN
(2 rows)
SELECT * FROM test_float4 WHERE i<'NaN'::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
0
1
2
3
Infinity
(8 rows)
SELECT * FROM test_float4 WHERE i<='NaN'::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
0
1
2
3
Infinity
NaN
(9 rows)
SELECT * FROM test_float4 WHERE i='NaN'::float8 ORDER BY i;
i
-----
NaN
(1 row)
SELECT * FROM test_float4 WHERE i>='NaN'::float8 ORDER BY i;
i
-----
NaN
(1 row)
SELECT * FROM test_float4 WHERE i>'NaN'::float8 ORDER BY i;
i
---
(0 rows)
-- Check rounding cases
-- 1e-300 rounds to 0 for float4 but not for float8
SELECT * FROM test_float4 WHERE i < -1e-300::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
(3 rows)
SELECT * FROM test_float4 WHERE i <= -1e-300::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
(3 rows)
SELECT * FROM test_float4 WHERE i = -1e-300::float8 ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_float4 WHERE i > -1e-300::float8 ORDER BY i;
i
----------
0
1
2
3
Infinity
NaN
(6 rows)
SELECT * FROM test_float4 WHERE i >= -1e-300::float8 ORDER BY i;
i
----------
0
1
2
3
Infinity
NaN
(6 rows)
SELECT * FROM test_float4 WHERE i < 1e-300::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
0
(4 rows)
SELECT * FROM test_float4 WHERE i <= 1e-300::float8 ORDER BY i;
i
-----------
-Infinity
-2
-1
0
(4 rows)
SELECT * FROM test_float4 WHERE i = 1e-300::float8 ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_float4 WHERE i > 1e-300::float8 ORDER BY i;
i
----------
1
2
3
Infinity
NaN
(5 rows)
SELECT * FROM test_float4 WHERE i >= 1e-300::float8 ORDER BY i;
i
----------
1
2
3
Infinity
NaN
(5 rows)
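1e-300 underflows to zero when forced into a float4 entry datum, yet the filtering comparison is still done in float8 against the original value, which is why the equality probe above finds nothing. A hedged check:

SELECT 0::float4 < 1e-300::float8;  -- true: comparison is done in float8
SELECT 0::float4 = 1e-300::float8;  -- false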


@@ -42,53 +42,3 @@ SELECT * FROM test_float8 WHERE i>1::float8 ORDER BY i;
3
(2 rows)
explain (costs off)
SELECT * FROM test_float8 WHERE i<1::float4 ORDER BY i;
QUERY PLAN
---------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_float8
Recheck Cond: (i < '1'::real)
-> Bitmap Index Scan on idx_float8
Index Cond: (i < '1'::real)
(6 rows)
SELECT * FROM test_float8 WHERE i<1::float4 ORDER BY i;
i
----
-2
-1
0
(3 rows)
SELECT * FROM test_float8 WHERE i<=1::float4 ORDER BY i;
i
----
-2
-1
0
1
(4 rows)
SELECT * FROM test_float8 WHERE i=1::float4 ORDER BY i;
i
---
1
(1 row)
SELECT * FROM test_float8 WHERE i>=1::float4 ORDER BY i;
i
---
1
2
3
(3 rows)
SELECT * FROM test_float8 WHERE i>1::float4 ORDER BY i;
i
---
2
3
(2 rows)


@@ -42,193 +42,3 @@ SELECT * FROM test_int2 WHERE i>1::int2 ORDER BY i;
3
(2 rows)
explain (costs off)
SELECT * FROM test_int2 WHERE i<1::int4 ORDER BY i;
QUERY PLAN
-------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_int2
Recheck Cond: (i < 1)
-> Bitmap Index Scan on idx_int2
Index Cond: (i < 1)
(6 rows)
SELECT * FROM test_int2 WHERE i<1::int4 ORDER BY i;
i
----
-2
-1
0
(3 rows)
SELECT * FROM test_int2 WHERE i<=1::int4 ORDER BY i;
i
----
-2
-1
0
1
(4 rows)
SELECT * FROM test_int2 WHERE i=1::int4 ORDER BY i;
i
---
1
(1 row)
SELECT * FROM test_int2 WHERE i>=1::int4 ORDER BY i;
i
---
1
2
3
(3 rows)
SELECT * FROM test_int2 WHERE i>1::int4 ORDER BY i;
i
---
2
3
(2 rows)
explain (costs off)
SELECT * FROM test_int2 WHERE i<1::int8 ORDER BY i;
QUERY PLAN
---------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_int2
Recheck Cond: (i < '1'::bigint)
-> Bitmap Index Scan on idx_int2
Index Cond: (i < '1'::bigint)
(6 rows)
SELECT * FROM test_int2 WHERE i<1::int8 ORDER BY i;
i
----
-2
-1
0
(3 rows)
SELECT * FROM test_int2 WHERE i<=1::int8 ORDER BY i;
i
----
-2
-1
0
1
(4 rows)
SELECT * FROM test_int2 WHERE i=1::int8 ORDER BY i;
i
---
1
(1 row)
SELECT * FROM test_int2 WHERE i>=1::int8 ORDER BY i;
i
---
1
2
3
(3 rows)
SELECT * FROM test_int2 WHERE i>1::int8 ORDER BY i;
i
---
2
3
(2 rows)
-- Check endpoint and out-of-range cases
INSERT INTO test_int2 VALUES ((-32768)::int2),(32767);
SELECT gin_clean_pending_list('idx_int2');
gin_clean_pending_list
------------------------
1
(1 row)
SELECT * FROM test_int2 WHERE i<(-32769)::int4 ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_int2 WHERE i<=(-32769)::int4 ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_int2 WHERE i=(-32769)::int4 ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_int2 WHERE i>=(-32769)::int4 ORDER BY i;
i
--------
-32768
-2
-1
0
1
2
3
32767
(8 rows)
SELECT * FROM test_int2 WHERE i>(-32769)::int4 ORDER BY i;
i
--------
-32768
-2
-1
0
1
2
3
32767
(8 rows)
SELECT * FROM test_int2 WHERE i<32768::int4 ORDER BY i;
i
--------
-32768
-2
-1
0
1
2
3
32767
(8 rows)
SELECT * FROM test_int2 WHERE i<=32768::int4 ORDER BY i;
i
--------
-32768
-2
-1
0
1
2
3
32767
(8 rows)
SELECT * FROM test_int2 WHERE i=32768::int4 ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_int2 WHERE i>=32768::int4 ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_int2 WHERE i>32768::int4 ORDER BY i;
i
---
(0 rows)
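Clamping, rather than casting, is what lets these out-of-range probes run at all; a straight cast of the same value simply errors out. A hedged illustration:

SELECT 32768::int4::int2;  -- ERROR:  smallint out of range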


@@ -42,103 +42,3 @@ SELECT * FROM test_int4 WHERE i>1::int4 ORDER BY i;
3
(2 rows)
explain (costs off)
SELECT * FROM test_int4 WHERE i<1::int2 ORDER BY i;
QUERY PLAN
-----------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_int4
Recheck Cond: (i < '1'::smallint)
-> Bitmap Index Scan on idx_int4
Index Cond: (i < '1'::smallint)
(6 rows)
SELECT * FROM test_int4 WHERE i<1::int2 ORDER BY i;
i
----
-2
-1
0
(3 rows)
SELECT * FROM test_int4 WHERE i<=1::int2 ORDER BY i;
i
----
-2
-1
0
1
(4 rows)
SELECT * FROM test_int4 WHERE i=1::int2 ORDER BY i;
i
---
1
(1 row)
SELECT * FROM test_int4 WHERE i>=1::int2 ORDER BY i;
i
---
1
2
3
(3 rows)
SELECT * FROM test_int4 WHERE i>1::int2 ORDER BY i;
i
---
2
3
(2 rows)
explain (costs off)
SELECT * FROM test_int4 WHERE i<1::int8 ORDER BY i;
QUERY PLAN
---------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_int4
Recheck Cond: (i < '1'::bigint)
-> Bitmap Index Scan on idx_int4
Index Cond: (i < '1'::bigint)
(6 rows)
SELECT * FROM test_int4 WHERE i<1::int8 ORDER BY i;
i
----
-2
-1
0
(3 rows)
SELECT * FROM test_int4 WHERE i<=1::int8 ORDER BY i;
i
----
-2
-1
0
1
(4 rows)
SELECT * FROM test_int4 WHERE i=1::int8 ORDER BY i;
i
---
1
(1 row)
SELECT * FROM test_int4 WHERE i>=1::int8 ORDER BY i;
i
---
1
2
3
(3 rows)
SELECT * FROM test_int4 WHERE i>1::int8 ORDER BY i;
i
---
2
3
(2 rows)


@@ -42,103 +42,3 @@ SELECT * FROM test_int8 WHERE i>1::int8 ORDER BY i;
3
(2 rows)
explain (costs off)
SELECT * FROM test_int8 WHERE i<1::int2 ORDER BY i;
QUERY PLAN
-----------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_int8
Recheck Cond: (i < '1'::smallint)
-> Bitmap Index Scan on idx_int8
Index Cond: (i < '1'::smallint)
(6 rows)
SELECT * FROM test_int8 WHERE i<1::int2 ORDER BY i;
i
----
-2
-1
0
(3 rows)
SELECT * FROM test_int8 WHERE i<=1::int2 ORDER BY i;
i
----
-2
-1
0
1
(4 rows)
SELECT * FROM test_int8 WHERE i=1::int2 ORDER BY i;
i
---
1
(1 row)
SELECT * FROM test_int8 WHERE i>=1::int2 ORDER BY i;
i
---
1
2
3
(3 rows)
SELECT * FROM test_int8 WHERE i>1::int2 ORDER BY i;
i
---
2
3
(2 rows)
explain (costs off)
SELECT * FROM test_int8 WHERE i<1::int4 ORDER BY i;
QUERY PLAN
-------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_int8
Recheck Cond: (i < 1)
-> Bitmap Index Scan on idx_int8
Index Cond: (i < 1)
(6 rows)
SELECT * FROM test_int8 WHERE i<1::int4 ORDER BY i;
i
----
-2
-1
0
(3 rows)
SELECT * FROM test_int8 WHERE i<=1::int4 ORDER BY i;
i
----
-2
-1
0
1
(4 rows)
SELECT * FROM test_int8 WHERE i=1::int4 ORDER BY i;
i
---
1
(1 row)
SELECT * FROM test_int8 WHERE i>=1::int4 ORDER BY i;
i
---
1
2
3
(3 rows)
SELECT * FROM test_int8 WHERE i>1::int4 ORDER BY i;
i
---
2
3
(2 rows)


@@ -95,62 +95,3 @@ EXPLAIN (COSTS OFF) SELECT * FROM test_name WHERE i>'abc' ORDER BY i;
Index Cond: (i > 'abc'::name)
(6 rows)
explain (costs off)
SELECT * FROM test_name WHERE i<'abc'::text ORDER BY i;
QUERY PLAN
---------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_name
Recheck Cond: (i < 'abc'::text)
-> Bitmap Index Scan on idx_name
Index Cond: (i < 'abc'::text)
(6 rows)
SELECT * FROM test_name WHERE i<'abc'::text ORDER BY i;
i
-----
a
ab
abb
(3 rows)
SELECT * FROM test_name WHERE i<='abc'::text ORDER BY i;
i
-----
a
ab
abb
abc
(4 rows)
SELECT * FROM test_name WHERE i='abc'::text ORDER BY i;
i
-----
abc
(1 row)
SELECT * FROM test_name WHERE i>='abc'::text ORDER BY i;
i
-----
abc
axy
xyz
(3 rows)
SELECT * FROM test_name WHERE i>'abc'::text ORDER BY i;
i
-----
axy
xyz
(2 rows)
SELECT * FROM test_name WHERE i<=repeat('abc', 100) ORDER BY i;
i
-----
a
ab
abb
abc
(4 rows)


@@ -42,53 +42,3 @@ SELECT * FROM test_text WHERE i>'abc' ORDER BY i;
xyz
(2 rows)
explain (costs off)
SELECT * FROM test_text WHERE i<'abc'::name COLLATE "default" ORDER BY i;
QUERY PLAN
---------------------------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_text
Recheck Cond: (i < 'abc'::name COLLATE "default")
-> Bitmap Index Scan on idx_text
Index Cond: (i < 'abc'::name COLLATE "default")
(6 rows)
SELECT * FROM test_text WHERE i<'abc'::name COLLATE "default" ORDER BY i;
i
-----
a
ab
abb
(3 rows)
SELECT * FROM test_text WHERE i<='abc'::name COLLATE "default" ORDER BY i;
i
-----
a
ab
abb
abc
(4 rows)
SELECT * FROM test_text WHERE i='abc'::name COLLATE "default" ORDER BY i;
i
-----
abc
(1 row)
SELECT * FROM test_text WHERE i>='abc'::name COLLATE "default" ORDER BY i;
i
-----
abc
axy
xyz
(3 rows)
SELECT * FROM test_text WHERE i>'abc'::name COLLATE "default" ORDER BY i;
i
-----
axy
xyz
(2 rows)


@@ -7,8 +7,8 @@ INSERT INTO test_timestamp VALUES
( '2004-10-26 04:55:08' ),
( '2004-10-26 05:55:08' ),
( '2004-10-26 08:55:08' ),
( '2004-10-27 09:55:08' ),
( '2004-10-27 10:55:08' )
( '2004-10-26 09:55:08' ),
( '2004-10-26 10:55:08' )
;
CREATE INDEX idx_timestamp ON test_timestamp USING gin (i);
SELECT * FROM test_timestamp WHERE i<'2004-10-26 08:55:08'::timestamp ORDER BY i;
@@ -38,308 +38,14 @@ SELECT * FROM test_timestamp WHERE i>='2004-10-26 08:55:08'::timestamp ORDER BY
i
--------------------------
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
Tue Oct 26 09:55:08 2004
Tue Oct 26 10:55:08 2004
(3 rows)
SELECT * FROM test_timestamp WHERE i>'2004-10-26 08:55:08'::timestamp ORDER BY i;
i
--------------------------
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
Tue Oct 26 09:55:08 2004
Tue Oct 26 10:55:08 2004
(2 rows)
explain (costs off)
SELECT * FROM test_timestamp WHERE i<'2004-10-27'::date ORDER BY i;
QUERY PLAN
----------------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_timestamp
Recheck Cond: (i < '10-27-2004'::date)
-> Bitmap Index Scan on idx_timestamp
Index Cond: (i < '10-27-2004'::date)
(6 rows)
SELECT * FROM test_timestamp WHERE i<'2004-10-27'::date ORDER BY i;
i
--------------------------
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
(4 rows)
SELECT * FROM test_timestamp WHERE i<='2004-10-27'::date ORDER BY i;
i
--------------------------
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
(4 rows)
SELECT * FROM test_timestamp WHERE i='2004-10-27'::date ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_timestamp WHERE i>='2004-10-27'::date ORDER BY i;
i
--------------------------
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
(2 rows)
SELECT * FROM test_timestamp WHERE i>'2004-10-27'::date ORDER BY i;
i
--------------------------
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
(2 rows)
explain (costs off)
SELECT * FROM test_timestamp WHERE i<'2004-10-26 08:55:08'::timestamptz ORDER BY i;
QUERY PLAN
------------------------------------------------------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_timestamp
Recheck Cond: (i < 'Tue Oct 26 08:55:08 2004 PDT'::timestamp with time zone)
-> Bitmap Index Scan on idx_timestamp
Index Cond: (i < 'Tue Oct 26 08:55:08 2004 PDT'::timestamp with time zone)
(6 rows)
SELECT * FROM test_timestamp WHERE i<'2004-10-26 08:55:08'::timestamptz ORDER BY i;
i
--------------------------
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
(3 rows)
SELECT * FROM test_timestamp WHERE i<='2004-10-26 08:55:08'::timestamptz ORDER BY i;
i
--------------------------
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
(4 rows)
SELECT * FROM test_timestamp WHERE i='2004-10-26 08:55:08'::timestamptz ORDER BY i;
i
--------------------------
Tue Oct 26 08:55:08 2004
(1 row)
SELECT * FROM test_timestamp WHERE i>='2004-10-26 08:55:08'::timestamptz ORDER BY i;
i
--------------------------
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
(3 rows)
SELECT * FROM test_timestamp WHERE i>'2004-10-26 08:55:08'::timestamptz ORDER BY i;
i
--------------------------
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
(2 rows)
-- Check endpoint and out-of-range cases
INSERT INTO test_timestamp VALUES ('-infinity'), ('infinity');
SELECT gin_clean_pending_list('idx_timestamp');
gin_clean_pending_list
------------------------
1
(1 row)
SELECT * FROM test_timestamp WHERE i<'-infinity'::date ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_timestamp WHERE i<='-infinity'::date ORDER BY i;
i
-----------
-infinity
(1 row)
SELECT * FROM test_timestamp WHERE i='-infinity'::date ORDER BY i;
i
-----------
-infinity
(1 row)
SELECT * FROM test_timestamp WHERE i>='-infinity'::date ORDER BY i;
i
--------------------------
-infinity
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
infinity
(8 rows)
SELECT * FROM test_timestamp WHERE i>'-infinity'::date ORDER BY i;
i
--------------------------
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
infinity
(7 rows)
SELECT * FROM test_timestamp WHERE i<'infinity'::date ORDER BY i;
i
--------------------------
-infinity
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
(7 rows)
SELECT * FROM test_timestamp WHERE i<='infinity'::date ORDER BY i;
i
--------------------------
-infinity
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
infinity
(8 rows)
SELECT * FROM test_timestamp WHERE i='infinity'::date ORDER BY i;
i
----------
infinity
(1 row)
SELECT * FROM test_timestamp WHERE i>='infinity'::date ORDER BY i;
i
----------
infinity
(1 row)
SELECT * FROM test_timestamp WHERE i>'infinity'::date ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_timestamp WHERE i<'-infinity'::timestamptz ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_timestamp WHERE i<='-infinity'::timestamptz ORDER BY i;
i
-----------
-infinity
(1 row)
SELECT * FROM test_timestamp WHERE i='-infinity'::timestamptz ORDER BY i;
i
-----------
-infinity
(1 row)
SELECT * FROM test_timestamp WHERE i>='-infinity'::timestamptz ORDER BY i;
i
--------------------------
-infinity
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
infinity
(8 rows)
SELECT * FROM test_timestamp WHERE i>'-infinity'::timestamptz ORDER BY i;
i
--------------------------
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
infinity
(7 rows)
SELECT * FROM test_timestamp WHERE i<'infinity'::timestamptz ORDER BY i;
i
--------------------------
-infinity
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
(7 rows)
SELECT * FROM test_timestamp WHERE i<='infinity'::timestamptz ORDER BY i;
i
--------------------------
-infinity
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
infinity
(8 rows)
SELECT * FROM test_timestamp WHERE i='infinity'::timestamptz ORDER BY i;
i
----------
infinity
(1 row)
SELECT * FROM test_timestamp WHERE i>='infinity'::timestamptz ORDER BY i;
i
----------
infinity
(1 row)
SELECT * FROM test_timestamp WHERE i>'infinity'::timestamptz ORDER BY i;
i
---
(0 rows)
-- This PST timestamptz will underflow if converted to timestamp
SELECT * FROM test_timestamp WHERE i<='4714-11-23 17:00 BC'::timestamptz ORDER BY i;
i
-----------
-infinity
(1 row)
SELECT * FROM test_timestamp WHERE i>'4714-11-23 17:00 BC'::timestamptz ORDER BY i;
i
--------------------------
Tue Oct 26 03:55:08 2004
Tue Oct 26 04:55:08 2004
Tue Oct 26 05:55:08 2004
Tue Oct 26 08:55:08 2004
Wed Oct 27 09:55:08 2004
Wed Oct 27 10:55:08 2004
infinity
(7 rows)

@ -7,8 +7,8 @@ INSERT INTO test_timestamptz VALUES
( '2004-10-26 04:55:08' ),
( '2004-10-26 05:55:08' ),
( '2004-10-26 08:55:08' ),
( '2004-10-27 09:55:08' ),
( '2004-10-27 10:55:08' )
( '2004-10-26 09:55:08' ),
( '2004-10-26 10:55:08' )
;
CREATE INDEX idx_timestamptz ON test_timestamptz USING gin (i);
SELECT * FROM test_timestamptz WHERE i<'2004-10-26 08:55:08'::timestamptz ORDER BY i;
@ -38,113 +38,14 @@ SELECT * FROM test_timestamptz WHERE i>='2004-10-26 08:55:08'::timestamptz ORDER
i
------------------------------
Tue Oct 26 08:55:08 2004 PDT
Wed Oct 27 09:55:08 2004 PDT
Wed Oct 27 10:55:08 2004 PDT
Tue Oct 26 09:55:08 2004 PDT
Tue Oct 26 10:55:08 2004 PDT
(3 rows)
SELECT * FROM test_timestamptz WHERE i>'2004-10-26 08:55:08'::timestamptz ORDER BY i;
i
------------------------------
Wed Oct 27 09:55:08 2004 PDT
Wed Oct 27 10:55:08 2004 PDT
(2 rows)
explain (costs off)
SELECT * FROM test_timestamptz WHERE i<'2004-10-27'::date ORDER BY i;
QUERY PLAN
----------------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_timestamptz
Recheck Cond: (i < '10-27-2004'::date)
-> Bitmap Index Scan on idx_timestamptz
Index Cond: (i < '10-27-2004'::date)
(6 rows)
SELECT * FROM test_timestamptz WHERE i<'2004-10-27'::date ORDER BY i;
i
------------------------------
Tue Oct 26 03:55:08 2004 PDT
Tue Oct 26 04:55:08 2004 PDT
Tue Oct 26 05:55:08 2004 PDT
Tue Oct 26 08:55:08 2004 PDT
(4 rows)
SELECT * FROM test_timestamptz WHERE i<='2004-10-27'::date ORDER BY i;
i
------------------------------
Tue Oct 26 03:55:08 2004 PDT
Tue Oct 26 04:55:08 2004 PDT
Tue Oct 26 05:55:08 2004 PDT
Tue Oct 26 08:55:08 2004 PDT
(4 rows)
SELECT * FROM test_timestamptz WHERE i='2004-10-27'::date ORDER BY i;
i
---
(0 rows)
SELECT * FROM test_timestamptz WHERE i>='2004-10-27'::date ORDER BY i;
i
------------------------------
Wed Oct 27 09:55:08 2004 PDT
Wed Oct 27 10:55:08 2004 PDT
(2 rows)
SELECT * FROM test_timestamptz WHERE i>'2004-10-27'::date ORDER BY i;
i
------------------------------
Wed Oct 27 09:55:08 2004 PDT
Wed Oct 27 10:55:08 2004 PDT
(2 rows)
explain (costs off)
SELECT * FROM test_timestamptz WHERE i<'2004-10-26 08:55:08'::timestamp ORDER BY i;
QUERY PLAN
-----------------------------------------------------------------------------------------
Sort
Sort Key: i
-> Bitmap Heap Scan on test_timestamptz
Recheck Cond: (i < 'Tue Oct 26 08:55:08 2004'::timestamp without time zone)
-> Bitmap Index Scan on idx_timestamptz
Index Cond: (i < 'Tue Oct 26 08:55:08 2004'::timestamp without time zone)
(6 rows)
SELECT * FROM test_timestamptz WHERE i<'2004-10-26 08:55:08'::timestamp ORDER BY i;
i
------------------------------
Tue Oct 26 03:55:08 2004 PDT
Tue Oct 26 04:55:08 2004 PDT
Tue Oct 26 05:55:08 2004 PDT
(3 rows)
SELECT * FROM test_timestamptz WHERE i<='2004-10-26 08:55:08'::timestamp ORDER BY i;
i
------------------------------
Tue Oct 26 03:55:08 2004 PDT
Tue Oct 26 04:55:08 2004 PDT
Tue Oct 26 05:55:08 2004 PDT
Tue Oct 26 08:55:08 2004 PDT
(4 rows)
SELECT * FROM test_timestamptz WHERE i='2004-10-26 08:55:08'::timestamp ORDER BY i;
i
------------------------------
Tue Oct 26 08:55:08 2004 PDT
(1 row)
SELECT * FROM test_timestamptz WHERE i>='2004-10-26 08:55:08'::timestamp ORDER BY i;
i
------------------------------
Tue Oct 26 08:55:08 2004 PDT
Wed Oct 27 09:55:08 2004 PDT
Wed Oct 27 10:55:08 2004 PDT
(3 rows)
SELECT * FROM test_timestamptz WHERE i>'2004-10-26 08:55:08'::timestamp ORDER BY i;
i
------------------------------
Wed Oct 27 09:55:08 2004 PDT
Wed Oct 27 10:55:08 2004 PDT
Tue Oct 26 09:55:08 2004 PDT
Tue Oct 26 10:55:08 2004 PDT
(2 rows)

@ -22,7 +22,6 @@ install_data(
'btree_gin--1.0--1.1.sql',
'btree_gin--1.1--1.2.sql',
'btree_gin--1.2--1.3.sql',
'btree_gin--1.3--1.4.sql',
kwargs: contrib_data_args,
)

@ -20,67 +20,3 @@ SELECT * FROM test_date WHERE i<='2004-10-26'::date ORDER BY i;
SELECT * FROM test_date WHERE i='2004-10-26'::date ORDER BY i;
SELECT * FROM test_date WHERE i>='2004-10-26'::date ORDER BY i;
SELECT * FROM test_date WHERE i>'2004-10-26'::date ORDER BY i;
explain (costs off)
SELECT * FROM test_date WHERE i<'2004-10-26'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i<'2004-10-26'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i<='2004-10-26'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i='2004-10-26'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i>='2004-10-26'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i>'2004-10-26'::timestamp ORDER BY i;
explain (costs off)
SELECT * FROM test_date WHERE i<'2004-10-26'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i<'2004-10-26'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i<='2004-10-26'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i='2004-10-26'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i>='2004-10-26'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i>'2004-10-26'::timestamptz ORDER BY i;
-- Check endpoint and out-of-range cases
INSERT INTO test_date VALUES ('-infinity'), ('infinity');
SELECT gin_clean_pending_list('idx_date');
SELECT * FROM test_date WHERE i<'-infinity'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i<='-infinity'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i='-infinity'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i>='-infinity'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i>'-infinity'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i<'infinity'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i<='infinity'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i='infinity'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i>='infinity'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i>'infinity'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i<'-infinity'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i<='-infinity'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i='-infinity'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i>='-infinity'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i>'-infinity'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i<'infinity'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i<='infinity'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i='infinity'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i>='infinity'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i>'infinity'::timestamptz ORDER BY i;
-- Check rounding cases
-- '2004-10-25 00:00:01' rounds to '2004-10-25' for date.
-- '2004-10-25 23:59:59' also rounds to '2004-10-25',
-- so it's the same case as '2004-10-25 00:00:01'
SELECT * FROM test_date WHERE i < '2004-10-25 00:00:01'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i <= '2004-10-25 00:00:01'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i = '2004-10-25 00:00:01'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i > '2004-10-25 00:00:01'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i >= '2004-10-25 00:00:01'::timestamp ORDER BY i;
SELECT * FROM test_date WHERE i < '2004-10-25 00:00:01'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i <= '2004-10-25 00:00:01'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i = '2004-10-25 00:00:01'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i > '2004-10-25 00:00:01'::timestamptz ORDER BY i;
SELECT * FROM test_date WHERE i >= '2004-10-25 00:00:01'::timestamptz ORDER BY i;

@ -13,56 +13,3 @@ SELECT * FROM test_float4 WHERE i<=1::float4 ORDER BY i;
SELECT * FROM test_float4 WHERE i=1::float4 ORDER BY i;
SELECT * FROM test_float4 WHERE i>=1::float4 ORDER BY i;
SELECT * FROM test_float4 WHERE i>1::float4 ORDER BY i;
explain (costs off)
SELECT * FROM test_float4 WHERE i<1::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i<1::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i<=1::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i=1::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i>=1::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i>1::float8 ORDER BY i;
-- Check endpoint and out-of-range cases
INSERT INTO test_float4 VALUES ('NaN'), ('Inf'), ('-Inf');
SELECT gin_clean_pending_list('idx_float4');
SELECT * FROM test_float4 WHERE i<'-Inf'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i<='-Inf'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i='-Inf'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i>='-Inf'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i>'-Inf'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i<'Inf'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i<='Inf'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i='Inf'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i>='Inf'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i>'Inf'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i<'1e300'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i<='1e300'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i='1e300'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i>='1e300'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i>'1e300'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i<'NaN'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i<='NaN'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i='NaN'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i>='NaN'::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i>'NaN'::float8 ORDER BY i;
-- Check rounding cases
-- 1e-300 rounds to 0 for float4 but not for float8
SELECT * FROM test_float4 WHERE i < -1e-300::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i <= -1e-300::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i = -1e-300::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i > -1e-300::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i >= -1e-300::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i < 1e-300::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i <= 1e-300::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i = 1e-300::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i > 1e-300::float8 ORDER BY i;
SELECT * FROM test_float4 WHERE i >= 1e-300::float8 ORDER BY i;

@ -13,12 +13,3 @@ SELECT * FROM test_float8 WHERE i<=1::float8 ORDER BY i;
SELECT * FROM test_float8 WHERE i=1::float8 ORDER BY i;
SELECT * FROM test_float8 WHERE i>=1::float8 ORDER BY i;
SELECT * FROM test_float8 WHERE i>1::float8 ORDER BY i;
explain (costs off)
SELECT * FROM test_float8 WHERE i<1::float4 ORDER BY i;
SELECT * FROM test_float8 WHERE i<1::float4 ORDER BY i;
SELECT * FROM test_float8 WHERE i<=1::float4 ORDER BY i;
SELECT * FROM test_float8 WHERE i=1::float4 ORDER BY i;
SELECT * FROM test_float8 WHERE i>=1::float4 ORDER BY i;
SELECT * FROM test_float8 WHERE i>1::float4 ORDER BY i;

@ -13,38 +13,3 @@ SELECT * FROM test_int2 WHERE i<=1::int2 ORDER BY i;
SELECT * FROM test_int2 WHERE i=1::int2 ORDER BY i;
SELECT * FROM test_int2 WHERE i>=1::int2 ORDER BY i;
SELECT * FROM test_int2 WHERE i>1::int2 ORDER BY i;
explain (costs off)
SELECT * FROM test_int2 WHERE i<1::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i<1::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i<=1::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i=1::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i>=1::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i>1::int4 ORDER BY i;
explain (costs off)
SELECT * FROM test_int2 WHERE i<1::int8 ORDER BY i;
SELECT * FROM test_int2 WHERE i<1::int8 ORDER BY i;
SELECT * FROM test_int2 WHERE i<=1::int8 ORDER BY i;
SELECT * FROM test_int2 WHERE i=1::int8 ORDER BY i;
SELECT * FROM test_int2 WHERE i>=1::int8 ORDER BY i;
SELECT * FROM test_int2 WHERE i>1::int8 ORDER BY i;
-- Check endpoint and out-of-range cases
INSERT INTO test_int2 VALUES ((-32768)::int2),(32767);
SELECT gin_clean_pending_list('idx_int2');
SELECT * FROM test_int2 WHERE i<(-32769)::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i<=(-32769)::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i=(-32769)::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i>=(-32769)::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i>(-32769)::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i<32768::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i<=32768::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i=32768::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i>=32768::int4 ORDER BY i;
SELECT * FROM test_int2 WHERE i>32768::int4 ORDER BY i;

@ -13,21 +13,3 @@ SELECT * FROM test_int4 WHERE i<=1::int4 ORDER BY i;
SELECT * FROM test_int4 WHERE i=1::int4 ORDER BY i;
SELECT * FROM test_int4 WHERE i>=1::int4 ORDER BY i;
SELECT * FROM test_int4 WHERE i>1::int4 ORDER BY i;
explain (costs off)
SELECT * FROM test_int4 WHERE i<1::int2 ORDER BY i;
SELECT * FROM test_int4 WHERE i<1::int2 ORDER BY i;
SELECT * FROM test_int4 WHERE i<=1::int2 ORDER BY i;
SELECT * FROM test_int4 WHERE i=1::int2 ORDER BY i;
SELECT * FROM test_int4 WHERE i>=1::int2 ORDER BY i;
SELECT * FROM test_int4 WHERE i>1::int2 ORDER BY i;
explain (costs off)
SELECT * FROM test_int4 WHERE i<1::int8 ORDER BY i;
SELECT * FROM test_int4 WHERE i<1::int8 ORDER BY i;
SELECT * FROM test_int4 WHERE i<=1::int8 ORDER BY i;
SELECT * FROM test_int4 WHERE i=1::int8 ORDER BY i;
SELECT * FROM test_int4 WHERE i>=1::int8 ORDER BY i;
SELECT * FROM test_int4 WHERE i>1::int8 ORDER BY i;

@ -13,21 +13,3 @@ SELECT * FROM test_int8 WHERE i<=1::int8 ORDER BY i;
SELECT * FROM test_int8 WHERE i=1::int8 ORDER BY i;
SELECT * FROM test_int8 WHERE i>=1::int8 ORDER BY i;
SELECT * FROM test_int8 WHERE i>1::int8 ORDER BY i;
explain (costs off)
SELECT * FROM test_int8 WHERE i<1::int2 ORDER BY i;
SELECT * FROM test_int8 WHERE i<1::int2 ORDER BY i;
SELECT * FROM test_int8 WHERE i<=1::int2 ORDER BY i;
SELECT * FROM test_int8 WHERE i=1::int2 ORDER BY i;
SELECT * FROM test_int8 WHERE i>=1::int2 ORDER BY i;
SELECT * FROM test_int8 WHERE i>1::int2 ORDER BY i;
explain (costs off)
SELECT * FROM test_int8 WHERE i<1::int4 ORDER BY i;
SELECT * FROM test_int8 WHERE i<1::int4 ORDER BY i;
SELECT * FROM test_int8 WHERE i<=1::int4 ORDER BY i;
SELECT * FROM test_int8 WHERE i=1::int4 ORDER BY i;
SELECT * FROM test_int8 WHERE i>=1::int4 ORDER BY i;
SELECT * FROM test_int8 WHERE i>1::int4 ORDER BY i;

@ -19,14 +19,3 @@ EXPLAIN (COSTS OFF) SELECT * FROM test_name WHERE i<='abc' ORDER BY i;
EXPLAIN (COSTS OFF) SELECT * FROM test_name WHERE i='abc' ORDER BY i;
EXPLAIN (COSTS OFF) SELECT * FROM test_name WHERE i>='abc' ORDER BY i;
EXPLAIN (COSTS OFF) SELECT * FROM test_name WHERE i>'abc' ORDER BY i;
explain (costs off)
SELECT * FROM test_name WHERE i<'abc'::text ORDER BY i;
SELECT * FROM test_name WHERE i<'abc'::text ORDER BY i;
SELECT * FROM test_name WHERE i<='abc'::text ORDER BY i;
SELECT * FROM test_name WHERE i='abc'::text ORDER BY i;
SELECT * FROM test_name WHERE i>='abc'::text ORDER BY i;
SELECT * FROM test_name WHERE i>'abc'::text ORDER BY i;
SELECT * FROM test_name WHERE i<=repeat('abc', 100) ORDER BY i;

@ -13,12 +13,3 @@ SELECT * FROM test_text WHERE i<='abc' ORDER BY i;
SELECT * FROM test_text WHERE i='abc' ORDER BY i;
SELECT * FROM test_text WHERE i>='abc' ORDER BY i;
SELECT * FROM test_text WHERE i>'abc' ORDER BY i;
explain (costs off)
SELECT * FROM test_text WHERE i<'abc'::name COLLATE "default" ORDER BY i;
SELECT * FROM test_text WHERE i<'abc'::name COLLATE "default" ORDER BY i;
SELECT * FROM test_text WHERE i<='abc'::name COLLATE "default" ORDER BY i;
SELECT * FROM test_text WHERE i='abc'::name COLLATE "default" ORDER BY i;
SELECT * FROM test_text WHERE i>='abc'::name COLLATE "default" ORDER BY i;
SELECT * FROM test_text WHERE i>'abc'::name COLLATE "default" ORDER BY i;

@ -9,8 +9,8 @@ INSERT INTO test_timestamp VALUES
( '2004-10-26 04:55:08' ),
( '2004-10-26 05:55:08' ),
( '2004-10-26 08:55:08' ),
( '2004-10-27 09:55:08' ),
( '2004-10-27 10:55:08' )
( '2004-10-26 09:55:08' ),
( '2004-10-26 10:55:08' )
;
CREATE INDEX idx_timestamp ON test_timestamp USING gin (i);
@ -20,54 +20,3 @@ SELECT * FROM test_timestamp WHERE i<='2004-10-26 08:55:08'::timestamp ORDER BY
SELECT * FROM test_timestamp WHERE i='2004-10-26 08:55:08'::timestamp ORDER BY i;
SELECT * FROM test_timestamp WHERE i>='2004-10-26 08:55:08'::timestamp ORDER BY i;
SELECT * FROM test_timestamp WHERE i>'2004-10-26 08:55:08'::timestamp ORDER BY i;
explain (costs off)
SELECT * FROM test_timestamp WHERE i<'2004-10-27'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i<'2004-10-27'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i<='2004-10-27'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i='2004-10-27'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i>='2004-10-27'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i>'2004-10-27'::date ORDER BY i;
explain (costs off)
SELECT * FROM test_timestamp WHERE i<'2004-10-26 08:55:08'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i<'2004-10-26 08:55:08'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i<='2004-10-26 08:55:08'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i='2004-10-26 08:55:08'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i>='2004-10-26 08:55:08'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i>'2004-10-26 08:55:08'::timestamptz ORDER BY i;
-- Check endpoint and out-of-range cases
INSERT INTO test_timestamp VALUES ('-infinity'), ('infinity');
SELECT gin_clean_pending_list('idx_timestamp');
SELECT * FROM test_timestamp WHERE i<'-infinity'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i<='-infinity'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i='-infinity'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i>='-infinity'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i>'-infinity'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i<'infinity'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i<='infinity'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i='infinity'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i>='infinity'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i>'infinity'::date ORDER BY i;
SELECT * FROM test_timestamp WHERE i<'-infinity'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i<='-infinity'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i='-infinity'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i>='-infinity'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i>'-infinity'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i<'infinity'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i<='infinity'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i='infinity'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i>='infinity'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i>'infinity'::timestamptz ORDER BY i;
-- This PST timestamptz will underflow if converted to timestamp
SELECT * FROM test_timestamp WHERE i<='4714-11-23 17:00 BC'::timestamptz ORDER BY i;
SELECT * FROM test_timestamp WHERE i>'4714-11-23 17:00 BC'::timestamptz ORDER BY i;

@ -9,8 +9,8 @@ INSERT INTO test_timestamptz VALUES
( '2004-10-26 04:55:08' ),
( '2004-10-26 05:55:08' ),
( '2004-10-26 08:55:08' ),
( '2004-10-27 09:55:08' ),
( '2004-10-27 10:55:08' )
( '2004-10-26 09:55:08' ),
( '2004-10-26 10:55:08' )
;
CREATE INDEX idx_timestamptz ON test_timestamptz USING gin (i);
@ -20,21 +20,3 @@ SELECT * FROM test_timestamptz WHERE i<='2004-10-26 08:55:08'::timestamptz ORDER
SELECT * FROM test_timestamptz WHERE i='2004-10-26 08:55:08'::timestamptz ORDER BY i;
SELECT * FROM test_timestamptz WHERE i>='2004-10-26 08:55:08'::timestamptz ORDER BY i;
SELECT * FROM test_timestamptz WHERE i>'2004-10-26 08:55:08'::timestamptz ORDER BY i;
explain (costs off)
SELECT * FROM test_timestamptz WHERE i<'2004-10-27'::date ORDER BY i;
SELECT * FROM test_timestamptz WHERE i<'2004-10-27'::date ORDER BY i;
SELECT * FROM test_timestamptz WHERE i<='2004-10-27'::date ORDER BY i;
SELECT * FROM test_timestamptz WHERE i='2004-10-27'::date ORDER BY i;
SELECT * FROM test_timestamptz WHERE i>='2004-10-27'::date ORDER BY i;
SELECT * FROM test_timestamptz WHERE i>'2004-10-27'::date ORDER BY i;
explain (costs off)
SELECT * FROM test_timestamptz WHERE i<'2004-10-26 08:55:08'::timestamp ORDER BY i;
SELECT * FROM test_timestamptz WHERE i<'2004-10-26 08:55:08'::timestamp ORDER BY i;
SELECT * FROM test_timestamptz WHERE i<='2004-10-26 08:55:08'::timestamp ORDER BY i;
SELECT * FROM test_timestamptz WHERE i='2004-10-26 08:55:08'::timestamp ORDER BY i;
SELECT * FROM test_timestamptz WHERE i>='2004-10-26 08:55:08'::timestamp ORDER BY i;
SELECT * FROM test_timestamptz WHERE i>'2004-10-26 08:55:08'::timestamp ORDER BY i;

@ -5,7 +5,6 @@
#include "btree_gist.h"
#include "btree_utils_num.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct boolkey

@ -7,7 +7,6 @@
#include "btree_utils_num.h"
#include "common/int.h"
#include "utils/cash.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct

@ -7,7 +7,6 @@
#include "btree_utils_num.h"
#include "utils/fmgrprotos.h"
#include "utils/date.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct

@ -8,7 +8,6 @@
#include "fmgr.h"
#include "utils/fmgrprotos.h"
#include "utils/fmgroids.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
/* enums are really Oids, so we just use the same structure */
@ -194,8 +193,8 @@ gbt_enum_ssup_cmp(Datum x, Datum y, SortSupport ssup)
return DatumGetInt32(CallerFInfoFunctionCall2(enum_cmp,
ssup->ssup_extra,
InvalidOid,
ObjectIdGetDatum(arg1->lower),
ObjectIdGetDatum(arg2->lower)));
arg1->lower,
arg2->lower));
}
Datum

@ -6,7 +6,6 @@
#include "btree_gist.h"
#include "btree_utils_num.h"
#include "utils/float.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct float4key

@ -6,7 +6,6 @@
#include "btree_gist.h"
#include "btree_utils_num.h"
#include "utils/float.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct float8key

@ -7,7 +7,6 @@
#include "btree_utils_num.h"
#include "catalog/pg_type.h"
#include "utils/builtins.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct inetkey

@ -6,7 +6,6 @@
#include "btree_gist.h"
#include "btree_utils_num.h"
#include "common/int.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct int16key

@ -5,7 +5,6 @@
#include "btree_gist.h"
#include "btree_utils_num.h"
#include "common/int.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct int32key

@ -6,7 +6,6 @@
#include "btree_gist.h"
#include "btree_utils_num.h"
#include "common/int.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct int64key

@ -6,7 +6,6 @@
#include "btree_gist.h"
#include "btree_utils_num.h"
#include "utils/fmgrprotos.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
#include "utils/timestamp.h"

@ -7,7 +7,6 @@
#include "btree_utils_num.h"
#include "utils/fmgrprotos.h"
#include "utils/inet.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct

@ -7,7 +7,6 @@
#include "btree_utils_num.h"
#include "utils/fmgrprotos.h"
#include "utils/inet.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct

@ -192,7 +192,7 @@ gbt_numeric_penalty(PG_FUNCTION_ARGS)
*result = 0.0;
if (DatumGetBool(DirectFunctionCall2(numeric_gt, NumericGetDatum(ds), NumericGetDatum(nul))))
if (DirectFunctionCall2(numeric_gt, NumericGetDatum(ds), NumericGetDatum(nul)))
{
*result += FLT_MIN;
os = DatumGetNumeric(DirectFunctionCall2(numeric_div,

@ -5,7 +5,6 @@
#include "btree_gist.h"
#include "btree_utils_num.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct

@ -7,7 +7,6 @@
#include "btree_utils_num.h"
#include "utils/fmgrprotos.h"
#include "utils/date.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
#include "utils/timestamp.h"
@ -32,6 +31,13 @@ PG_FUNCTION_INFO_V1(gbt_time_sortsupport);
PG_FUNCTION_INFO_V1(gbt_timetz_sortsupport);
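/*
* TimeADT is an int64.  With USE_FLOAT8_BYVAL it fits in a Datum
* directly; without it, Int64GetDatum would palloc a copy on every
* call, so pass a pointer to the caller's variable instead (safe
* because the Datum is consumed before the variable goes away).
*/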
#ifdef USE_FLOAT8_BYVAL
#define TimeADTGetDatumFast(X) TimeADTGetDatum(X)
#else
#define TimeADTGetDatumFast(X) PointerGetDatum(&(X))
#endif
static bool
gbt_timegt(const void *a, const void *b, FmgrInfo *flinfo)
{
@ -39,8 +45,8 @@ gbt_timegt(const void *a, const void *b, FmgrInfo *flinfo)
const TimeADT *bb = (const TimeADT *) b;
return DatumGetBool(DirectFunctionCall2(time_gt,
TimeADTGetDatum(*aa),
TimeADTGetDatum(*bb)));
TimeADTGetDatumFast(*aa),
TimeADTGetDatumFast(*bb)));
}
static bool
@ -50,8 +56,8 @@ gbt_timege(const void *a, const void *b, FmgrInfo *flinfo)
const TimeADT *bb = (const TimeADT *) b;
return DatumGetBool(DirectFunctionCall2(time_ge,
TimeADTGetDatum(*aa),
TimeADTGetDatum(*bb)));
TimeADTGetDatumFast(*aa),
TimeADTGetDatumFast(*bb)));
}
static bool
@ -61,8 +67,8 @@ gbt_timeeq(const void *a, const void *b, FmgrInfo *flinfo)
const TimeADT *bb = (const TimeADT *) b;
return DatumGetBool(DirectFunctionCall2(time_eq,
TimeADTGetDatum(*aa),
TimeADTGetDatum(*bb)));
TimeADTGetDatumFast(*aa),
TimeADTGetDatumFast(*bb)));
}
static bool
@ -72,8 +78,8 @@ gbt_timele(const void *a, const void *b, FmgrInfo *flinfo)
const TimeADT *bb = (const TimeADT *) b;
return DatumGetBool(DirectFunctionCall2(time_le,
TimeADTGetDatum(*aa),
TimeADTGetDatum(*bb)));
TimeADTGetDatumFast(*aa),
TimeADTGetDatumFast(*bb)));
}
static bool
@ -83,8 +89,8 @@ gbt_timelt(const void *a, const void *b, FmgrInfo *flinfo)
const TimeADT *bb = (const TimeADT *) b;
return DatumGetBool(DirectFunctionCall2(time_lt,
TimeADTGetDatum(*aa),
TimeADTGetDatum(*bb)));
TimeADTGetDatumFast(*aa),
TimeADTGetDatumFast(*bb)));
}
static int
@ -94,9 +100,9 @@ gbt_timekey_cmp(const void *a, const void *b, FmgrInfo *flinfo)
timeKEY *ib = (timeKEY *) (((const Nsrt *) b)->t);
int res;
res = DatumGetInt32(DirectFunctionCall2(time_cmp, TimeADTGetDatum(ia->lower), TimeADTGetDatum(ib->lower)));
res = DatumGetInt32(DirectFunctionCall2(time_cmp, TimeADTGetDatumFast(ia->lower), TimeADTGetDatumFast(ib->lower)));
if (res == 0)
return DatumGetInt32(DirectFunctionCall2(time_cmp, TimeADTGetDatum(ia->upper), TimeADTGetDatum(ib->upper)));
return DatumGetInt32(DirectFunctionCall2(time_cmp, TimeADTGetDatumFast(ia->upper), TimeADTGetDatumFast(ib->upper)));
return res;
}
@ -109,8 +115,8 @@ gbt_time_dist(const void *a, const void *b, FmgrInfo *flinfo)
Interval *i;
i = DatumGetIntervalP(DirectFunctionCall2(time_mi_time,
TimeADTGetDatum(*aa),
TimeADTGetDatum(*bb)));
TimeADTGetDatumFast(*aa),
TimeADTGetDatumFast(*bb)));
return fabs(INTERVAL_TO_SEC(i));
}
@ -273,14 +279,14 @@ gbt_time_penalty(PG_FUNCTION_ARGS)
double res2;
intr = DatumGetIntervalP(DirectFunctionCall2(time_mi_time,
TimeADTGetDatum(newentry->upper),
TimeADTGetDatum(origentry->upper)));
TimeADTGetDatumFast(newentry->upper),
TimeADTGetDatumFast(origentry->upper)));
res = INTERVAL_TO_SEC(intr);
res = Max(res, 0);
intr = DatumGetIntervalP(DirectFunctionCall2(time_mi_time,
TimeADTGetDatum(origentry->lower),
TimeADTGetDatum(newentry->lower)));
TimeADTGetDatumFast(origentry->lower),
TimeADTGetDatumFast(newentry->lower)));
res2 = INTERVAL_TO_SEC(intr);
res2 = Max(res2, 0);
@ -291,8 +297,8 @@ gbt_time_penalty(PG_FUNCTION_ARGS)
if (res > 0)
{
intr = DatumGetIntervalP(DirectFunctionCall2(time_mi_time,
TimeADTGetDatum(origentry->upper),
TimeADTGetDatum(origentry->lower)));
TimeADTGetDatumFast(origentry->upper),
TimeADTGetDatumFast(origentry->lower)));
*result += FLT_MIN;
*result += (float) (res / (res + INTERVAL_TO_SEC(intr)));
*result *= (FLT_MAX / (((GISTENTRY *) PG_GETARG_POINTER(0))->rel->rd_att->natts + 1));
@ -328,8 +334,8 @@ gbt_timekey_ssup_cmp(Datum x, Datum y, SortSupport ssup)
/* for leaf items we expect lower == upper, so only compare lower */
return DatumGetInt32(DirectFunctionCall2(time_cmp,
TimeADTGetDatum(arg1->lower),
TimeADTGetDatum(arg2->lower)));
TimeADTGetDatumFast(arg1->lower),
TimeADTGetDatumFast(arg2->lower)));
}
Datum

@ -10,7 +10,6 @@
#include "utils/fmgrprotos.h"
#include "utils/timestamp.h"
#include "utils/float.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
typedef struct
@ -34,6 +33,13 @@ PG_FUNCTION_INFO_V1(gbt_ts_same);
PG_FUNCTION_INFO_V1(gbt_ts_sortsupport);
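/*
* Timestamp is an int64.  With USE_FLOAT8_BYVAL it fits in a Datum
* directly; without it, Int64GetDatum would palloc a copy, so pass a
* pointer to the caller's variable instead.
*/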
#ifdef USE_FLOAT8_BYVAL
#define TimestampGetDatumFast(X) TimestampGetDatum(X)
#else
#define TimestampGetDatumFast(X) PointerGetDatum(&(X))
#endif
/* define for comparison */
static bool
@ -43,8 +49,8 @@ gbt_tsgt(const void *a, const void *b, FmgrInfo *flinfo)
const Timestamp *bb = (const Timestamp *) b;
return DatumGetBool(DirectFunctionCall2(timestamp_gt,
TimestampGetDatum(*aa),
TimestampGetDatum(*bb)));
TimestampGetDatumFast(*aa),
TimestampGetDatumFast(*bb)));
}
static bool
@ -54,8 +60,8 @@ gbt_tsge(const void *a, const void *b, FmgrInfo *flinfo)
const Timestamp *bb = (const Timestamp *) b;
return DatumGetBool(DirectFunctionCall2(timestamp_ge,
TimestampGetDatum(*aa),
TimestampGetDatum(*bb)));
TimestampGetDatumFast(*aa),
TimestampGetDatumFast(*bb)));
}
static bool
@ -65,8 +71,8 @@ gbt_tseq(const void *a, const void *b, FmgrInfo *flinfo)
const Timestamp *bb = (const Timestamp *) b;
return DatumGetBool(DirectFunctionCall2(timestamp_eq,
TimestampGetDatum(*aa),
TimestampGetDatum(*bb)));
TimestampGetDatumFast(*aa),
TimestampGetDatumFast(*bb)));
}
static bool
@ -76,8 +82,8 @@ gbt_tsle(const void *a, const void *b, FmgrInfo *flinfo)
const Timestamp *bb = (const Timestamp *) b;
return DatumGetBool(DirectFunctionCall2(timestamp_le,
TimestampGetDatum(*aa),
TimestampGetDatum(*bb)));
TimestampGetDatumFast(*aa),
TimestampGetDatumFast(*bb)));
}
static bool
@ -87,8 +93,8 @@ gbt_tslt(const void *a, const void *b, FmgrInfo *flinfo)
const Timestamp *bb = (const Timestamp *) b;
return DatumGetBool(DirectFunctionCall2(timestamp_lt,
TimestampGetDatum(*aa),
TimestampGetDatum(*bb)));
TimestampGetDatumFast(*aa),
TimestampGetDatumFast(*bb)));
}
static int
@ -98,9 +104,9 @@ gbt_tskey_cmp(const void *a, const void *b, FmgrInfo *flinfo)
tsKEY *ib = (tsKEY *) (((const Nsrt *) b)->t);
int res;
res = DatumGetInt32(DirectFunctionCall2(timestamp_cmp, TimestampGetDatum(ia->lower), TimestampGetDatum(ib->lower)));
res = DatumGetInt32(DirectFunctionCall2(timestamp_cmp, TimestampGetDatumFast(ia->lower), TimestampGetDatumFast(ib->lower)));
if (res == 0)
return DatumGetInt32(DirectFunctionCall2(timestamp_cmp, TimestampGetDatum(ia->upper), TimestampGetDatum(ib->upper)));
return DatumGetInt32(DirectFunctionCall2(timestamp_cmp, TimestampGetDatumFast(ia->upper), TimestampGetDatumFast(ib->upper)));
return res;
}
@ -116,8 +122,8 @@ gbt_ts_dist(const void *a, const void *b, FmgrInfo *flinfo)
return get_float8_infinity();
i = DatumGetIntervalP(DirectFunctionCall2(timestamp_mi,
TimestampGetDatum(*aa),
TimestampGetDatum(*bb)));
TimestampGetDatumFast(*aa),
TimestampGetDatumFast(*bb)));
return fabs(INTERVAL_TO_SEC(i));
}
@ -398,8 +404,8 @@ gbt_ts_ssup_cmp(Datum x, Datum y, SortSupport ssup)
/* for leaf items we expect lower == upper, so only compare lower */
return DatumGetInt32(DirectFunctionCall2(timestamp_cmp,
TimestampGetDatum(arg1->lower),
TimestampGetDatum(arg2->lower)));
TimestampGetDatumFast(arg1->lower),
TimestampGetDatumFast(arg2->lower)));
}
Datum

@ -119,38 +119,38 @@ gbt_num_fetch(GISTENTRY *entry, const gbtree_ninfo *tinfo)
switch (tinfo->t)
{
case gbt_t_bool:
datum = BoolGetDatum(*(bool *) DatumGetPointer(entry->key));
datum = BoolGetDatum(*(bool *) entry->key);
break;
case gbt_t_int2:
datum = Int16GetDatum(*(int16 *) DatumGetPointer(entry->key));
datum = Int16GetDatum(*(int16 *) entry->key);
break;
case gbt_t_int4:
datum = Int32GetDatum(*(int32 *) DatumGetPointer(entry->key));
datum = Int32GetDatum(*(int32 *) entry->key);
break;
case gbt_t_int8:
datum = Int64GetDatum(*(int64 *) DatumGetPointer(entry->key));
datum = Int64GetDatum(*(int64 *) entry->key);
break;
case gbt_t_oid:
case gbt_t_enum:
datum = ObjectIdGetDatum(*(Oid *) DatumGetPointer(entry->key));
datum = ObjectIdGetDatum(*(Oid *) entry->key);
break;
case gbt_t_float4:
datum = Float4GetDatum(*(float4 *) DatumGetPointer(entry->key));
datum = Float4GetDatum(*(float4 *) entry->key);
break;
case gbt_t_float8:
datum = Float8GetDatum(*(float8 *) DatumGetPointer(entry->key));
datum = Float8GetDatum(*(float8 *) entry->key);
break;
case gbt_t_date:
datum = DateADTGetDatum(*(DateADT *) DatumGetPointer(entry->key));
datum = DateADTGetDatum(*(DateADT *) entry->key);
break;
case gbt_t_time:
datum = TimeADTGetDatum(*(TimeADT *) DatumGetPointer(entry->key));
datum = TimeADTGetDatum(*(TimeADT *) entry->key);
break;
case gbt_t_ts:
datum = TimestampGetDatum(*(Timestamp *) DatumGetPointer(entry->key));
datum = TimestampGetDatum(*(Timestamp *) entry->key);
break;
case gbt_t_cash:
datum = CashGetDatum(*(Cash *) DatumGetPointer(entry->key));
datum = CashGetDatum(*(Cash *) entry->key);
break;
default:
datum = entry->key;

@ -11,7 +11,6 @@
#include "btree_utils_var.h"
#include "mb/pg_wchar.h"
#include "utils/rel.h"
#include "varatt.h"
/* used for key sorting */
typedef struct

@ -6,7 +6,6 @@
#include "btree_gist.h"
#include "btree_utils_num.h"
#include "port/pg_bswap.h"
#include "utils/rel.h"
#include "utils/sortsupport.h"
#include "utils/uuid.h"

@ -62,7 +62,10 @@ typedef struct NDBOX
/* for cubescan.l and cubeparse.y */
/* All grammar constructs return strings */
#define YYSTYPE char *
#ifndef YY_TYPEDEF_YY_SCANNER_T
#define YY_TYPEDEF_YY_SCANNER_T
typedef void *yyscan_t;
#endif
/* in cubescan.l */
extern int cube_yylex(YYSTYPE *yylval_param, yyscan_t yyscanner);

@ -101,8 +101,8 @@ static void materializeQueryResult(FunctionCallInfo fcinfo,
const char *conname,
const char *sql,
bool fail);
static PGresult *storeQueryResult(storeInfo *sinfo, PGconn *conn, const char *sql);
static void storeRow(storeInfo *sinfo, PGresult *res, bool first);
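/*
* The storeInfo is shared with code inside PG_TRY/PG_CATCH, so the
* helpers take it volatile-qualified to keep its contents valid
* across longjmp.
*/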
static PGresult *storeQueryResult(volatile storeInfo *sinfo, PGconn *conn, const char *sql);
static void storeRow(volatile storeInfo *sinfo, PGresult *res, bool first);
static remoteConn *getConnectionByName(const char *name);
static HTAB *createConnHash(void);
static remoteConn *createNewConnection(const char *name);
@ -169,6 +169,14 @@ typedef struct remoteConnHashEnt
/* initial number of connection hashes */
#define NUMCONN 16
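/*
* NULL-tolerant pstrdup: PQresultErrorField returns NULL for fields
* the server did not send, and we must copy the rest before the
* PGresult is freed.
*/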
static char *
xpstrdup(const char *in)
{
if (in == NULL)
return NULL;
return pstrdup(in);
}
pg_noreturn static void
dblink_res_internalerror(PGconn *conn, PGresult *res, const char *p2)
{
@ -232,10 +240,6 @@ dblink_get_conn(char *conname_or_str,
errmsg("could not establish connection"),
errdetail_internal("%s", msg)));
}
PQsetNoticeReceiver(conn, libpqsrv_notice_receiver,
"received message via remote connection");
dblink_security_check(conn, NULL, connstr);
if (PQclientEncoding(conn) != GetDatabaseEncoding())
PQsetClientEncoding(conn, GetDatabaseEncodingName());
@ -334,9 +338,6 @@ dblink_connect(PG_FUNCTION_ARGS)
errdetail_internal("%s", msg)));
}
PQsetNoticeReceiver(conn, libpqsrv_notice_receiver,
"received message via remote connection");
/* check password actually used if not superuser */
dblink_security_check(conn, connname, connstr);
@ -862,123 +863,131 @@ static void
materializeResult(FunctionCallInfo fcinfo, PGconn *conn, PGresult *res)
{
ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
TupleDesc tupdesc;
bool is_sql_cmd;
int ntuples;
int nfields;
/* prepTuplestoreResult must have been called previously */
Assert(rsinfo->returnMode == SFRM_Materialize);
if (PQresultStatus(res) == PGRES_COMMAND_OK)
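/*
* Do the work inside PG_TRY so the PG_FINALLY block below releases
* the PGresult even if we error out partway through.
*/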
PG_TRY();
{
is_sql_cmd = true;
TupleDesc tupdesc;
bool is_sql_cmd;
int ntuples;
int nfields;
if (PQresultStatus(res) == PGRES_COMMAND_OK)
{
is_sql_cmd = true;
/*
* need a tuple descriptor representing one TEXT column to return
* the command status string as our result tuple
*/
tupdesc = CreateTemplateTupleDesc(1);
TupleDescInitEntry(tupdesc, (AttrNumber) 1, "status",
TEXTOID, -1, 0);
ntuples = 1;
nfields = 1;
}
else
{
Assert(PQresultStatus(res) == PGRES_TUPLES_OK);
is_sql_cmd = false;
/* get a tuple descriptor for our result type */
switch (get_call_result_type(fcinfo, NULL, &tupdesc))
{
case TYPEFUNC_COMPOSITE:
/* success */
break;
case TYPEFUNC_RECORD:
/* failed to determine actual type of RECORD */
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("function returning record called in context "
"that cannot accept type record")));
break;
default:
/* result type isn't composite */
elog(ERROR, "return type must be a row type");
break;
}
/* make sure we have a persistent copy of the tupdesc */
tupdesc = CreateTupleDescCopy(tupdesc);
ntuples = PQntuples(res);
nfields = PQnfields(res);
}
/*
* need a tuple descriptor representing one TEXT column to return the
* command status string as our result tuple
* check result and tuple descriptor have the same number of columns
*/
tupdesc = CreateTemplateTupleDesc(1);
TupleDescInitEntry(tupdesc, (AttrNumber) 1, "status",
TEXTOID, -1, 0);
ntuples = 1;
nfields = 1;
}
else
{
Assert(PQresultStatus(res) == PGRES_TUPLES_OK);
if (nfields != tupdesc->natts)
ereport(ERROR,
(errcode(ERRCODE_DATATYPE_MISMATCH),
errmsg("remote query result rowtype does not match "
"the specified FROM clause rowtype")));
is_sql_cmd = false;
/* get a tuple descriptor for our result type */
switch (get_call_result_type(fcinfo, NULL, &tupdesc))
if (ntuples > 0)
{
case TYPEFUNC_COMPOSITE:
/* success */
break;
case TYPEFUNC_RECORD:
/* failed to determine actual type of RECORD */
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("function returning record called in context "
"that cannot accept type record")));
break;
default:
/* result type isn't composite */
elog(ERROR, "return type must be a row type");
break;
}
AttInMetadata *attinmeta;
int nestlevel = -1;
Tuplestorestate *tupstore;
MemoryContext oldcontext;
int row;
char **values;
/* make sure we have a persistent copy of the tupdesc */
tupdesc = CreateTupleDescCopy(tupdesc);
ntuples = PQntuples(res);
nfields = PQnfields(res);
}
/*
* check result and tuple descriptor have the same number of columns
*/
if (nfields != tupdesc->natts)
ereport(ERROR,
(errcode(ERRCODE_DATATYPE_MISMATCH),
errmsg("remote query result rowtype does not match "
"the specified FROM clause rowtype")));
if (ntuples > 0)
{
AttInMetadata *attinmeta;
int nestlevel = -1;
Tuplestorestate *tupstore;
MemoryContext oldcontext;
int row;
char **values;
attinmeta = TupleDescGetAttInMetadata(tupdesc);
/* Set GUCs to ensure we read GUC-sensitive data types correctly */
if (!is_sql_cmd)
nestlevel = applyRemoteGucs(conn);
oldcontext = MemoryContextSwitchTo(rsinfo->econtext->ecxt_per_query_memory);
tupstore = tuplestore_begin_heap(true, false, work_mem);
rsinfo->setResult = tupstore;
rsinfo->setDesc = tupdesc;
MemoryContextSwitchTo(oldcontext);
values = palloc_array(char *, nfields);
/* put all tuples into the tuplestore */
for (row = 0; row < ntuples; row++)
{
HeapTuple tuple;
attinmeta = TupleDescGetAttInMetadata(tupdesc);
/* Set GUCs to ensure we read GUC-sensitive data types correctly */
if (!is_sql_cmd)
{
int i;
nestlevel = applyRemoteGucs(conn);
for (i = 0; i < nfields; i++)
oldcontext = MemoryContextSwitchTo(rsinfo->econtext->ecxt_per_query_memory);
tupstore = tuplestore_begin_heap(true, false, work_mem);
rsinfo->setResult = tupstore;
rsinfo->setDesc = tupdesc;
MemoryContextSwitchTo(oldcontext);
values = palloc_array(char *, nfields);
/* put all tuples into the tuplestore */
for (row = 0; row < ntuples; row++)
{
HeapTuple tuple;
if (!is_sql_cmd)
{
if (PQgetisnull(res, row, i))
values[i] = NULL;
else
values[i] = PQgetvalue(res, row, i);
int i;
for (i = 0; i < nfields; i++)
{
if (PQgetisnull(res, row, i))
values[i] = NULL;
else
values[i] = PQgetvalue(res, row, i);
}
}
}
else
{
values[0] = PQcmdStatus(res);
else
{
values[0] = PQcmdStatus(res);
}
/* build the tuple and put it into the tuplestore. */
tuple = BuildTupleFromCStrings(attinmeta, values);
tuplestore_puttuple(tupstore, tuple);
}
/* build the tuple and put it into the tuplestore. */
tuple = BuildTupleFromCStrings(attinmeta, values);
tuplestore_puttuple(tupstore, tuple);
/* clean up GUC settings, if we changed any */
restoreLocalGucs(nestlevel);
}
/* clean up GUC settings, if we changed any */
restoreLocalGucs(nestlevel);
}
PQclear(res);
PG_FINALLY();
{
/* be sure to release the libpq result */
PQclear(res);
}
PG_END_TRY();
}
/*
@ -997,17 +1006,16 @@ materializeQueryResult(FunctionCallInfo fcinfo,
bool fail)
{
ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
PGresult *volatile res = NULL;
volatile storeInfo sinfo = {0};
/* prepTuplestoreResult must have been called previously */
Assert(rsinfo->returnMode == SFRM_Materialize);
/* Use a PG_TRY block to ensure we pump libpq dry of results */
sinfo.fcinfo = fcinfo;
PG_TRY();
{
storeInfo sinfo = {0};
PGresult *res;
sinfo.fcinfo = fcinfo;
/* Create short-lived memory context for data conversions */
sinfo.tmpcontext = AllocSetContextCreate(CurrentMemoryContext,
"dblink temporary context",
@ -1020,7 +1028,14 @@ materializeQueryResult(FunctionCallInfo fcinfo,
(PQresultStatus(res) != PGRES_COMMAND_OK &&
PQresultStatus(res) != PGRES_TUPLES_OK))
{
dblink_res_error(conn, conname, res, fail,
/*
* dblink_res_error will clear the passed PGresult, so we need
* this ugly dance to avoid doing so twice during error exit
*/
PGresult *res1 = res;
res = NULL;
dblink_res_error(conn, conname, res1, fail,
"while executing query");
/* if fail isn't set, we'll return an empty query result */
}
@ -1059,6 +1074,7 @@ materializeQueryResult(FunctionCallInfo fcinfo,
tuplestore_puttuple(tupstore, tuple);
PQclear(res);
res = NULL;
}
else
{
@ -1067,20 +1083,26 @@ materializeQueryResult(FunctionCallInfo fcinfo,
Assert(rsinfo->setResult != NULL);
PQclear(res);
res = NULL;
}
/* clean up data conversion short-lived memory context */
if (sinfo.tmpcontext != NULL)
MemoryContextDelete(sinfo.tmpcontext);
sinfo.tmpcontext = NULL;
PQclear(sinfo.last_res);
sinfo.last_res = NULL;
PQclear(sinfo.cur_res);
sinfo.cur_res = NULL;
}
PG_CATCH();
{
PGresult *res;
/* be sure to clear out any pending data in libpq */
/* be sure to release any libpq result we collected */
PQclear(res);
PQclear(sinfo.last_res);
PQclear(sinfo.cur_res);
/* and clear out any pending data in libpq */
while ((res = libpqsrv_get_result(conn, dblink_we_get_result)) !=
NULL)
PQclear(res);
@ -1093,7 +1115,7 @@ materializeQueryResult(FunctionCallInfo fcinfo,
* Execute query, and send any result rows to sinfo->tuplestore.
*/
static PGresult *
storeQueryResult(storeInfo *sinfo, PGconn *conn, const char *sql)
storeQueryResult(volatile storeInfo *sinfo, PGconn *conn, const char *sql)
{
bool first = true;
int nestlevel = -1;
@ -1161,7 +1183,7 @@ storeQueryResult(storeInfo *sinfo, PGconn *conn, const char *sql)
* (in this case the PGresult might contain either zero or one row).
*/
static void
storeRow(storeInfo *sinfo, PGresult *res, bool first)
storeRow(volatile storeInfo *sinfo, PGresult *res, bool first)
{
int nfields = PQnfields(res);
HeapTuple tuple;
@ -2766,13 +2788,10 @@ dblink_connstr_check(const char *connstr)
/*
* Report an error received from the remote server
*
* res: the received error result
* res: the received error result (will be freed)
* fail: true for ERROR ereport, false for NOTICE
* fmt and following args: sprintf-style format and values for errcontext;
* the resulting string should be worded like "while <some action>"
*
* If "res" is not NULL, it'll be PQclear'ed here (unless we throw error,
* in which case memory context cleanup will clear it eventually).
*/
static void
dblink_res_error(PGconn *conn, const char *conname, PGresult *res,
@ -2780,11 +2799,15 @@ dblink_res_error(PGconn *conn, const char *conname, PGresult *res,
{
int level;
char *pg_diag_sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);
char *message_primary = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);
char *message_detail = PQresultErrorField(res, PG_DIAG_MESSAGE_DETAIL);
char *message_hint = PQresultErrorField(res, PG_DIAG_MESSAGE_HINT);
char *message_context = PQresultErrorField(res, PG_DIAG_CONTEXT);
char *pg_diag_message_primary = PQresultErrorField(res, PG_DIAG_MESSAGE_PRIMARY);
char *pg_diag_message_detail = PQresultErrorField(res, PG_DIAG_MESSAGE_DETAIL);
char *pg_diag_message_hint = PQresultErrorField(res, PG_DIAG_MESSAGE_HINT);
char *pg_diag_context = PQresultErrorField(res, PG_DIAG_CONTEXT);
int sqlstate;
char *message_primary;
char *message_detail;
char *message_hint;
char *message_context;
va_list ap;
char dblink_context_msg[512];
@ -2802,6 +2825,11 @@ dblink_res_error(PGconn *conn, const char *conname, PGresult *res,
else
sqlstate = ERRCODE_CONNECTION_FAILURE;
message_primary = xpstrdup(pg_diag_message_primary);
message_detail = xpstrdup(pg_diag_message_detail);
message_hint = xpstrdup(pg_diag_message_hint);
message_context = xpstrdup(pg_diag_context);
/*
* If we don't get a message from the PGresult, try the PGconn. This is
* needed because for connection-level failures, PQgetResult may just
@ -2810,6 +2838,14 @@ dblink_res_error(PGconn *conn, const char *conname, PGresult *res,
if (message_primary == NULL)
message_primary = pchomp(PQerrorMessage(conn));
/*
* Now that we've copied all the data we need out of the PGresult, it's
* safe to free it. We must do this to avoid PGresult leakage. We're
* leaking all the strings too, but those are in palloc'd memory that will
* get cleaned up eventually.
*/
PQclear(res);
/*
* Format the basic errcontext string. Below, we'll add on something
* about the connection name. That's a violation of the translatability
@ -2834,7 +2870,6 @@ dblink_res_error(PGconn *conn, const char *conname, PGresult *res,
dblink_context_msg, conname)) :
(errcontext("%s on unnamed dblink connection",
dblink_context_msg))));
PQclear(res);
}
/*

@ -34,7 +34,7 @@ tests += {
'sql': [
'dblink',
],
'regress_args': ['--dlpath', meson.project_build_root() / 'src/test/regress'],
'regress_args': ['--dlpath', meson.build_root() / 'src/test/regress'],
},
'tap': {
'tests': [

@ -322,7 +322,6 @@ SET constraint_exclusion = 'on';
SELECT explain_filter('EXPLAIN (VERBOSE, COSTS FALSE) SELECT * FROM agg_csv WHERE a < 0');
Result
Output: a, b
Replaces: Scan on agg_csv
One-Time Filter: false
\t off

@ -127,7 +127,7 @@ gin_extract_hstore_query(PG_FUNCTION_ARGS)
/* Nulls in the array are ignored, cf hstoreArrayToPairs */
if (key_nulls[i])
continue;
item = makeitem(VARDATA(DatumGetPointer(key_datums[i])), VARSIZE(DatumGetPointer(key_datums[i])) - VARHDRSZ, KEYFLAG);
item = makeitem(VARDATA(key_datums[i]), VARSIZE(key_datums[i]) - VARHDRSZ, KEYFLAG);
entries[j++] = PointerGetDatum(item);
}

@ -576,7 +576,7 @@ ghstore_consistent(PG_FUNCTION_ARGS)
if (key_nulls[i])
continue;
crc = crc32_sz(VARDATA(DatumGetPointer(key_datums[i])), VARSIZE(DatumGetPointer(key_datums[i])) - VARHDRSZ);
crc = crc32_sz(VARDATA(key_datums[i]), VARSIZE(key_datums[i]) - VARHDRSZ);
if (!(GETBIT(sign, HASHVAL(crc, siglen))))
res = false;
}
@ -599,7 +599,7 @@ ghstore_consistent(PG_FUNCTION_ARGS)
if (key_nulls[i])
continue;
crc = crc32_sz(VARDATA(DatumGetPointer(key_datums[i])), VARSIZE(DatumGetPointer(key_datums[i])) - VARHDRSZ);
crc = crc32_sz(VARDATA(key_datums[i]), VARSIZE(key_datums[i]) - VARHDRSZ);
if (GETBIT(sign, HASHVAL(crc, siglen)))
res = true;
}

@ -684,22 +684,22 @@ hstore_from_arrays(PG_FUNCTION_ARGS)
if (!value_nulls || value_nulls[i])
{
pairs[i].key = VARDATA(DatumGetPointer(key_datums[i]));
pairs[i].key = VARDATA(key_datums[i]);
pairs[i].val = NULL;
pairs[i].keylen =
hstoreCheckKeyLen(VARSIZE(DatumGetPointer(key_datums[i])) - VARHDRSZ);
hstoreCheckKeyLen(VARSIZE(key_datums[i]) - VARHDRSZ);
pairs[i].vallen = 4;
pairs[i].isnull = true;
pairs[i].needfree = false;
}
else
{
pairs[i].key = VARDATA(DatumGetPointer(key_datums[i]));
pairs[i].val = VARDATA(DatumGetPointer(value_datums[i]));
pairs[i].key = VARDATA(key_datums[i]);
pairs[i].val = VARDATA(value_datums[i]);
pairs[i].keylen =
hstoreCheckKeyLen(VARSIZE(DatumGetPointer(key_datums[i])) - VARHDRSZ);
hstoreCheckKeyLen(VARSIZE(key_datums[i]) - VARHDRSZ);
pairs[i].vallen =
hstoreCheckValLen(VARSIZE(DatumGetPointer(value_datums[i])) - VARHDRSZ);
hstoreCheckValLen(VARSIZE(value_datums[i]) - VARHDRSZ);
pairs[i].isnull = false;
pairs[i].needfree = false;
}
@ -778,22 +778,22 @@ hstore_from_array(PG_FUNCTION_ARGS)
if (in_nulls[i * 2 + 1])
{
pairs[i].key = VARDATA(DatumGetPointer(in_datums[i * 2]));
pairs[i].key = VARDATA(in_datums[i * 2]);
pairs[i].val = NULL;
pairs[i].keylen =
hstoreCheckKeyLen(VARSIZE(DatumGetPointer(in_datums[i * 2])) - VARHDRSZ);
hstoreCheckKeyLen(VARSIZE(in_datums[i * 2]) - VARHDRSZ);
pairs[i].vallen = 4;
pairs[i].isnull = true;
pairs[i].needfree = false;
}
else
{
pairs[i].key = VARDATA(DatumGetPointer(in_datums[i * 2]));
pairs[i].val = VARDATA(DatumGetPointer(in_datums[i * 2 + 1]));
pairs[i].key = VARDATA(in_datums[i * 2]);
pairs[i].val = VARDATA(in_datums[i * 2 + 1]);
pairs[i].keylen =
hstoreCheckKeyLen(VARSIZE(DatumGetPointer(in_datums[i * 2])) - VARHDRSZ);
hstoreCheckKeyLen(VARSIZE(in_datums[i * 2]) - VARHDRSZ);
pairs[i].vallen =
hstoreCheckValLen(VARSIZE(DatumGetPointer(in_datums[i * 2 + 1])) - VARHDRSZ);
hstoreCheckValLen(VARSIZE(in_datums[i * 2 + 1]) - VARHDRSZ);
pairs[i].isnull = false;
pairs[i].needfree = false;
}

@ -107,8 +107,8 @@ hstoreArrayToPairs(ArrayType *a, int *npairs)
{
if (!key_nulls[i])
{
key_pairs[j].key = VARDATA(DatumGetPointer(key_datums[i]));
key_pairs[j].keylen = VARSIZE(DatumGetPointer(key_datums[i])) - VARHDRSZ;
key_pairs[j].key = VARDATA(key_datums[i]);
key_pairs[j].keylen = VARSIZE(key_datums[i]) - VARHDRSZ;
key_pairs[j].val = NULL;
key_pairs[j].vallen = 0;
key_pairs[j].needfree = 0;

@ -108,7 +108,7 @@ _int_overlap(PG_FUNCTION_ARGS)
CHECKARRVALID(a);
CHECKARRVALID(b);
if (ARRISEMPTY(a) || ARRISEMPTY(b))
PG_RETURN_BOOL(false);
return false;
SORT(a);
SORT(b);

@ -210,8 +210,8 @@ _int_matchsel(PG_FUNCTION_ARGS)
*/
if (sslot.nnumbers == sslot.nvalues + 3)
{
/* Grab the minimal MCE frequency. */
minfreq = sslot.numbers[sslot.nvalues];
/* Grab the lowest frequency. */
minfreq = sslot.numbers[sslot.nnumbers - (sslot.nnumbers - sslot.nvalues)];
mcelems = sslot.values;
mcefreqs = sslot.numbers;
@ -269,11 +269,8 @@ int_query_opr_selec(ITEM *item, Datum *mcelems, float4 *mcefreqs,
else
{
/*
* The element is not in MCELEM. Estimate its frequency as half
* that of the least-frequent MCE. (We know it cannot be more
* than minfreq, and it could be a great deal less. Half seems
* like a good compromise.) For probably-historical reasons,
* clamp to not more than DEFAULT_EQ_SEL.
* The element is not in MCELEM. Punt, but assume that the
* selectivity cannot be more than minfreq / 2.
*/
selec = Min(DEFAULT_EQ_SEL, minfreq / 2);
}

@ -1,5 +1,5 @@
/*
* UPC.h
* ISSN.h
* PostgreSQL type definitions for ISNs (ISBN, ISMN, ISSN, EAN13, UPC)
*
* No information available for UPC prefixes

@ -726,7 +726,7 @@ string2ean(const char *str, struct Node *escontext, ean13 *result,
if (type != INVALID)
goto eaninvalid;
type = ISSN;
*aux1++ = pg_ascii_toupper((unsigned char) *aux2);
*aux1++ = toupper((unsigned char) *aux2);
length++;
}
else if (length == 9 && (digit || *aux2 == 'X' || *aux2 == 'x') && last)
@ -736,7 +736,7 @@ string2ean(const char *str, struct Node *escontext, ean13 *result,
goto eaninvalid;
if (type == INVALID)
type = ISBN; /* ISMN must start with 'M' */
*aux1++ = pg_ascii_toupper((unsigned char) *aux2);
*aux1++ = toupper((unsigned char) *aux2);
length++;
}
else if (length == 11 && digit && last)

@ -84,7 +84,7 @@ _ltree_compress(PG_FUNCTION_ARGS)
entry->rel, entry->page,
entry->offset, false);
}
else if (!LTG_ISALLTRUE(DatumGetPointer(entry->key)))
else if (!LTG_ISALLTRUE(entry->key))
{
int32 i;
ltree_gist *key;

@ -506,7 +506,7 @@ bt_page_print_tuples(ua_page_items *uargs)
j = 0;
memset(nulls, 0, sizeof(nulls));
values[j++] = Int16GetDatum(offset);
values[j++] = DatumGetInt16(offset);
values[j++] = ItemPointerGetDatum(&itup->t_tid);
values[j++] = Int32GetDatum((int) IndexTupleSize(itup));
values[j++] = BoolGetDatum(IndexTupleHasNulls(itup));

@ -5,21 +5,21 @@ CREATE UNLOGGED TABLE test_gist AS SELECT point(i,i) p, i::text t FROM
CREATE INDEX test_gist_idx ON test_gist USING gist (p);
-- Page 0 is the root, the rest are leaf pages
SELECT * FROM gist_page_opaque_info(get_raw_page('test_gist_idx', 0));
lsn | nsn | rightlink | flags
------------+------------+------------+-------
0/00000001 | 0/00000000 | 4294967295 | {}
lsn | nsn | rightlink | flags
-----+-----+------------+-------
0/1 | 0/0 | 4294967295 | {}
(1 row)
SELECT * FROM gist_page_opaque_info(get_raw_page('test_gist_idx', 1));
lsn | nsn | rightlink | flags
------------+------------+------------+--------
0/00000001 | 0/00000000 | 4294967295 | {leaf}
lsn | nsn | rightlink | flags
-----+-----+------------+--------
0/1 | 0/0 | 4294967295 | {leaf}
(1 row)
SELECT * FROM gist_page_opaque_info(get_raw_page('test_gist_idx', 2));
lsn | nsn | rightlink | flags
------------+------------+-----------+--------
0/00000001 | 0/00000000 | 1 | {leaf}
lsn | nsn | rightlink | flags
-----+-----+-----------+--------
0/1 | 0/0 | 1 | {leaf}
(1 row)
SELECT * FROM gist_page_items(get_raw_page('test_gist_idx', 0), 'test_gist_idx');

View File

@ -265,9 +265,9 @@ SELECT fsm_page_contents(decode(repeat('00', :block_size), 'hex'));
(1 row)
SELECT page_header(decode(repeat('00', :block_size), 'hex'));
page_header
------------------------------
(0/00000000,0,0,0,0,0,0,0,0)
page_header
-----------------------
(0/0,0,0,0,0,0,0,0,0)
(1 row)
SELECT page_checksum(decode(repeat('00', :block_size), 'hex'), 1);

View File

@ -174,7 +174,7 @@ gist_page_items_bytea(PG_FUNCTION_ARGS)
memset(nulls, 0, sizeof(nulls));
values[0] = Int16GetDatum(offset);
values[0] = DatumGetInt16(offset);
values[1] = ItemPointerGetDatum(&itup->t_tid);
values[2] = Int32GetDatum((int) IndexTupleSize(itup));
@ -281,7 +281,7 @@ gist_page_items(PG_FUNCTION_ARGS)
memset(nulls, 0, sizeof(nulls));
values[0] = Int16GetDatum(offset);
values[0] = DatumGetInt16(offset);
values[1] = ItemPointerGetDatum(&itup->t_tid);
values[2] = Int32GetDatum((int) IndexTupleSize(itup));
values[3] = BoolGetDatum(ItemIdIsDead(id));

View File

@ -256,7 +256,7 @@ heap_page_items(PG_FUNCTION_ARGS)
nulls[11] = true;
if (tuphdr->t_infomask & HEAP_HASOID_OLD)
values[12] = ObjectIdGetDatum(HeapTupleHeaderGetOidOld(tuphdr));
values[12] = HeapTupleHeaderGetOidOld(tuphdr);
else
nulls[12] = true;

View File

@ -282,7 +282,7 @@ page_header(PG_FUNCTION_ARGS)
{
char lsnchar[64];
snprintf(lsnchar, sizeof(lsnchar), "%X/%08X", LSN_FORMAT_ARGS(lsn));
snprintf(lsnchar, sizeof(lsnchar), "%X/%X", LSN_FORMAT_ARGS(lsn));
values[0] = CStringGetTextDatum(lsnchar);
}
else
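
The %X/%08X format zero-pads the LSN's low word to eight hex digits, which is why the expected outputs above show 0/00000001 where the other side shows 0/1; LSN_FORMAT_ARGS() merely splits the 64-bit LSN into its high and low 32-bit halves. A standalone sketch (the macro is re-declared in simplified form to keep the example self-contained):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t XLogRecPtr;

/* Same splitting as LSN_FORMAT_ARGS() in xlogdefs.h (assert omitted). */
#define LSN_FORMAT_ARGS(lsn) ((uint32_t) ((lsn) >> 32)), ((uint32_t) (lsn))

int
main(void)
{
	XLogRecPtr	lsn = 1;
	char		buf[64];

	snprintf(buf, sizeof(buf), "%X/%08X", LSN_FORMAT_ARGS(lsn));
	puts(buf);					/* prints 0/00000001 */

	snprintf(buf, sizeof(buf), "%X/%X", LSN_FORMAT_ARGS(lsn));
	puts(buf);					/* prints 0/1 */
	return 0;
}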

View File

@ -194,8 +194,6 @@ pg_buffercache_pages(PG_FUNCTION_ARGS)
BufferDesc *bufHdr;
uint32 buf_state;
CHECK_FOR_INTERRUPTS();
bufHdr = GetBufferDescriptor(i);
/* Lock each buffer header before inspecting. */
buf_state = LockBufHdr(bufHdr);
@ -562,8 +560,6 @@ pg_buffercache_summary(PG_FUNCTION_ARGS)
BufferDesc *bufHdr;
uint32 buf_state;
CHECK_FOR_INTERRUPTS();
/*
* This function summarizes the state of all headers. Locking the
* buffer headers wouldn't provide an improved result as the state of
@ -624,8 +620,6 @@ pg_buffercache_usage_counts(PG_FUNCTION_ARGS)
uint32 buf_state = pg_atomic_read_u32(&bufHdr->state);
int usage_count;
CHECK_FOR_INTERRUPTS();
usage_count = BUF_STATE_GET_USAGECOUNT(buf_state);
usage_counts[usage_count]++;
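
These hunks add or drop a CHECK_FOR_INTERRUPTS() inside loops that visit every buffer header; with a large shared_buffers setting such loops run long enough that query cancel and backend termination deserve a chance to fire on each iteration. The shape of the pattern, as a sketch:

#include "postgres.h"
#include "miscadmin.h"			/* CHECK_FOR_INTERRUPTS */
#include "storage/buf_internals.h"

static void
scan_all_buffers(void)
{
	for (int i = 0; i < NBuffers; i++)
	{
		BufferDesc *bufHdr;
		uint32		buf_state;

		/* Allow cancel/terminate between buffers in a long scan. */
		CHECK_FOR_INTERRUPTS();

		bufHdr = GetBufferDescriptor(i);
		buf_state = LockBufHdr(bufHdr);
		/* ... inspect buf_state here ... */
		UnlockBufHdr(bufHdr, buf_state);
	}
}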

View File

@ -44,10 +44,9 @@ EXPLAIN (RANGE_TABLE) SELECT 1;
QUERY PLAN
------------------------------------------
Result (cost=0.00..0.01 rows=1 width=4)
RTIs: 1
RTI 1 (result):
Eref: "*RESULT*" ()
(4 rows)
(3 rows)
-- Create a partitioned table.
CREATE TABLE vegetables (id serial, name text, genus text)
@ -476,7 +475,6 @@ INSERT INTO vegetables (name, genus) VALUES ('broccoflower', 'brassica');
Nominal RTI: 1
Exclude Relation RTI: 0
-> Result
RTIs: 2
RTI 1 (relation):
Eref: vegetables (id, name, genus)
Relation: vegetables
@ -487,5 +485,5 @@ INSERT INTO vegetables (name, genus) VALUES ('broccoflower', 'brassica');
Eref: "*RESULT*" ()
Unprunable RTIs: 1
Result RTIs: 1
(15 rows)
(14 rows)

View File

@ -236,18 +236,6 @@ overexplain_per_node_hook(PlanState *planstate, List *ancestors,
((MergeAppend *) plan)->apprelids,
es);
break;
case T_Result:
/*
* 'relids' is only meaningful when plan->lefttree is NULL,
* but if somehow it ends up set when plan->lefttree is not
* NULL, print it anyway.
*/
if (plan->lefttree == NULL ||
((Result *) plan)->relids != NULL)
overexplain_bitmapset("RTIs",
((Result *) plan)->relids,
es);
default:
break;
}
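
overexplain_bitmapset(), used above for the RTIs annotation, has to walk a Bitmapset of range-table indexes; the conventional idiom is bms_next_member() seeded with -1. A sketch:

#include "postgres.h"
#include "nodes/bitmapset.h"

/* Visit the members of a Bitmapset of RTIs, in ascending order. */
static void
walk_rtis(Bitmapset *relids)
{
	int			rti = -1;

	while ((rti = bms_next_member(relids, rti)) >= 0)
		elog(DEBUG1, "RTI %d", rti);
}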

View File

@ -370,15 +370,6 @@ apw_load_buffers(void)
apw_state->prewarm_start_idx = apw_state->prewarm_stop_idx = 0;
apw_state->prewarmed_blocks = 0;
/* Don't prewarm more than we can fit. */
if (num_elements > NBuffers)
{
num_elements = NBuffers;
ereport(LOG,
(errmsg("autoprewarm capping prewarmed blocks to %d (shared_buffers size)",
NBuffers)));
}
/* Get the info position of the first block of the next database. */
while (apw_state->prewarm_start_idx < num_elements)
{
@ -419,6 +410,10 @@ apw_load_buffers(void)
apw_state->database = current_db;
Assert(apw_state->prewarm_start_idx < apw_state->prewarm_stop_idx);
/* If we've run out of free buffers, don't launch another worker. */
if (!have_free_buffer())
break;
/*
* Likewise, don't launch if we've already been told to shut down.
* (The launch would fail anyway, but we might as well skip it.)
@ -467,6 +462,12 @@ apw_read_stream_next_block(ReadStream *stream,
{
BlockInfoRecord blk = p->block_info[p->pos];
if (!have_free_buffer())
{
p->pos = apw_state->prewarm_stop_idx;
return InvalidBlockNumber;
}
if (blk.tablespace != p->tablespace)
return InvalidBlockNumber;
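
apw_read_stream_next_block() is a read-stream block-number callback: it returns the next block to read, and returning InvalidBlockNumber ends the stream, which is how the free-buffer check above cuts prewarming short. The general shape of such a callback, with a hypothetical state struct:

#include "postgres.h"
#include "storage/read_stream.h"

typedef struct DemoStreamState
{
	BlockNumber next;
	BlockNumber end;
} DemoStreamState;

/* Return blocks [next, end); InvalidBlockNumber terminates the stream. */
static BlockNumber
demo_next_block(ReadStream *stream,
				void *callback_private_data,
				void *per_buffer_data)
{
	DemoStreamState *s = (DemoStreamState *) callback_private_data;

	if (s->next >= s->end)
		return InvalidBlockNumber;
	return s->next++;
}
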
@ -522,10 +523,10 @@ autoprewarm_database_main(Datum main_arg)
blk = block_info[i];
/*
* Loop until we run out of blocks to prewarm or until we run out of
* Loop until we run out of blocks to prewarm or until we run out of free
* buffers.
*/
while (i < apw_state->prewarm_stop_idx)
while (i < apw_state->prewarm_stop_idx && have_free_buffer())
{
Oid tablespace = blk.tablespace;
RelFileNumber filenumber = blk.filenumber;
@ -567,13 +568,14 @@ autoprewarm_database_main(Datum main_arg)
/*
* We have a relation; now let's loop until we find a valid fork of
* the relation or we run out of buffers. Once we've read from all
* valid forks or run out of options, we'll close the relation and
* the relation or we run out of free buffers. Once we've read from
* all valid forks or run out of options, we'll close the relation and
* move on.
*/
while (i < apw_state->prewarm_stop_idx &&
blk.tablespace == tablespace &&
blk.filenumber == filenumber)
blk.filenumber == filenumber &&
have_free_buffer())
{
ForkNumber forknum = blk.forknum;
BlockNumber nblocks;
@ -862,7 +864,7 @@ apw_init_state(void *ptr)
{
AutoPrewarmSharedState *state = (AutoPrewarmSharedState *) ptr;
LWLockInitialize(&state->lock, LWLockNewTrancheId("autoprewarm"));
LWLockInitialize(&state->lock, LWLockNewTrancheId());
state->bgworker_pid = InvalidPid;
state->pid_using_dumpfile = InvalidPid;
}
@ -881,6 +883,7 @@ apw_init_shmem(void)
sizeof(AutoPrewarmSharedState),
apw_init_state,
&found);
LWLockRegisterTranche(apw_state->lock.tranche, "autoprewarm");
return found;
}
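
The two sides differ in tranche bookkeeping: one allocates the tranche ID and registers its name in a single LWLockNewTrancheId("autoprewarm") call, while the other allocates an anonymous ID and has every backend register the name after attaching to shared memory. A condensed sketch of the two-step variant (names hypothetical; real code also serializes this under AddinShmemInitLock):

#include "postgres.h"
#include "storage/lwlock.h"
#include "storage/shmem.h"

typedef struct DemoSharedState
{
	LWLock		lock;
} DemoSharedState;

static void
demo_init_shmem(void)
{
	DemoSharedState *state;
	bool		found;

	state = ShmemInitStruct("demo", sizeof(DemoSharedState), &found);
	if (!found)
		LWLockInitialize(&state->lock, LWLockNewTrancheId());

	/* Every backend must learn the tranche name for wait-event reporting. */
	LWLockRegisterTranche(state->lock.tranche, "demo");
}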

View File

@ -7,7 +7,6 @@ OBJS = \
EXTENSION = pg_stat_statements
DATA = pg_stat_statements--1.4.sql \
pg_stat_statements--1.12--1.13.sql \
pg_stat_statements--1.11--1.12.sql pg_stat_statements--1.10--1.11.sql \
pg_stat_statements--1.9--1.10.sql pg_stat_statements--1.8--1.9.sql \
pg_stat_statements--1.7--1.8.sql pg_stat_statements--1.6--1.7.sql \
@ -21,7 +20,7 @@ LDFLAGS_SL += $(filter -lm, $(LIBS))
REGRESS_OPTS = --temp-config $(top_srcdir)/contrib/pg_stat_statements/pg_stat_statements.conf
REGRESS = select dml cursors utility level_tracking planning \
user_activity wal entry_timestamp privileges extended \
parallel plancache cleanup oldextversions squashing
parallel cleanup oldextversions squashing
# Disabled because these tests require "shared_preload_libraries=pg_stat_statements",
# which typical installcheck users do not have (e.g. buildfarm clients).
NO_INSTALLCHECK = 1

View File

@ -57,8 +57,8 @@ SELECT calls, rows, query FROM pg_stat_statements ORDER BY query COLLATE "C";
1 | 0 | COMMIT
1 | 0 | DECLARE cursor_stats_1 CURSOR WITH HOLD FOR SELECT $1
1 | 0 | DECLARE cursor_stats_2 CURSOR WITH HOLD FOR SELECT $1
1 | 1 | FETCH $1 IN cursor_stats_1
1 | 1 | FETCH $1 IN cursor_stats_2
1 | 1 | FETCH 1 IN cursor_stats_1
1 | 1 | FETCH 1 IN cursor_stats_2
1 | 1 | SELECT pg_stat_statements_reset() IS NOT NULL AS t
(9 rows)
@ -68,140 +68,3 @@ SELECT pg_stat_statements_reset() IS NOT NULL AS t;
t
(1 row)
-- Normalization of FETCH statements
BEGIN;
DECLARE pgss_cursor CURSOR FOR SELECT FROM generate_series(1, 10);
-- implicit directions
FETCH pgss_cursor;
--
(1 row)
FETCH 1 pgss_cursor;
--
(1 row)
FETCH 2 pgss_cursor;
--
(2 rows)
FETCH -1 pgss_cursor;
--
(1 row)
-- explicit NEXT
FETCH NEXT pgss_cursor;
--
(1 row)
-- explicit PRIOR
FETCH PRIOR pgss_cursor;
--
(1 row)
-- explicit FIRST
FETCH FIRST pgss_cursor;
--
(1 row)
-- explicit LAST
FETCH LAST pgss_cursor;
--
(1 row)
-- explicit ABSOLUTE
FETCH ABSOLUTE 1 pgss_cursor;
--
(1 row)
FETCH ABSOLUTE 2 pgss_cursor;
--
(1 row)
FETCH ABSOLUTE -1 pgss_cursor;
--
(1 row)
-- explicit RELATIVE
FETCH RELATIVE 1 pgss_cursor;
--
(0 rows)
FETCH RELATIVE 2 pgss_cursor;
--
(0 rows)
FETCH RELATIVE -1 pgss_cursor;
--
(1 row)
-- explicit FORWARD
FETCH ALL pgss_cursor;
--
(0 rows)
-- explicit FORWARD ALL
FETCH FORWARD ALL pgss_cursor;
--
(0 rows)
-- explicit FETCH FORWARD
FETCH FORWARD pgss_cursor;
--
(0 rows)
FETCH FORWARD 1 pgss_cursor;
--
(0 rows)
FETCH FORWARD 2 pgss_cursor;
--
(0 rows)
FETCH FORWARD -1 pgss_cursor;
--
(1 row)
-- explicit FETCH BACKWARD
FETCH BACKWARD pgss_cursor;
--
(1 row)
FETCH BACKWARD 1 pgss_cursor;
--
(1 row)
FETCH BACKWARD 2 pgss_cursor;
--
(2 rows)
FETCH BACKWARD -1 pgss_cursor;
--
(1 row)
-- explicit BACKWARD ALL
FETCH BACKWARD ALL pgss_cursor;
--
(6 rows)
COMMIT;
SELECT calls, query FROM pg_stat_statements ORDER BY query COLLATE "C";
calls | query
-------+--------------------------------------------------------------------
1 | BEGIN
1 | COMMIT
1 | DECLARE pgss_cursor CURSOR FOR SELECT FROM generate_series($1, $2)
3 | FETCH ABSOLUTE $1 pgss_cursor
1 | FETCH ALL pgss_cursor
1 | FETCH BACKWARD ALL pgss_cursor
4 | FETCH BACKWARD pgss_cursor
1 | FETCH FIRST pgss_cursor
1 | FETCH FORWARD ALL pgss_cursor
4 | FETCH FORWARD pgss_cursor
1 | FETCH LAST pgss_cursor
1 | FETCH NEXT pgss_cursor
1 | FETCH PRIOR pgss_cursor
3 | FETCH RELATIVE $1 pgss_cursor
4 | FETCH pgss_cursor
1 | SELECT pg_stat_statements_reset() IS NOT NULL AS t
(16 rows)

View File

@ -1147,7 +1147,7 @@ SELECT toplevel, calls, query FROM pg_stat_statements
t | 1 | COMMIT
t | 1 | DECLARE FOOCUR CURSOR FOR SELECT * from stats_track_tab
f | 1 | DECLARE FOOCUR CURSOR FOR SELECT * from stats_track_tab;
t | 1 | FETCH FORWARD $1 FROM foocur
t | 1 | FETCH FORWARD 1 FROM foocur
t | 1 | SELECT pg_stat_statements_reset() IS NOT NULL AS t
(7 rows)
@ -1176,7 +1176,7 @@ SELECT toplevel, calls, query FROM pg_stat_statements
t | 1 | CLOSE foocur
t | 1 | COMMIT
t | 1 | DECLARE FOOCUR CURSOR FOR SELECT * FROM stats_track_tab
t | 1 | FETCH FORWARD $1 FROM foocur
t | 1 | FETCH FORWARD 1 FROM foocur
t | 1 | SELECT pg_stat_statements_reset() IS NOT NULL AS t
(6 rows)

View File

@ -407,71 +407,4 @@ SELECT count(*) > 0 AS has_data FROM pg_stat_statements;
t
(1 row)
-- New functions and views for pg_stat_statements in 1.13
ALTER EXTENSION pg_stat_statements UPDATE TO '1.13';
\d pg_stat_statements
View "public.pg_stat_statements"
Column | Type | Collation | Nullable | Default
----------------------------+--------------------------+-----------+----------+---------
userid | oid | | |
dbid | oid | | |
toplevel | boolean | | |
queryid | bigint | | |
query | text | | |
plans | bigint | | |
total_plan_time | double precision | | |
min_plan_time | double precision | | |
max_plan_time | double precision | | |
mean_plan_time | double precision | | |
stddev_plan_time | double precision | | |
calls | bigint | | |
total_exec_time | double precision | | |
min_exec_time | double precision | | |
max_exec_time | double precision | | |
mean_exec_time | double precision | | |
stddev_exec_time | double precision | | |
rows | bigint | | |
shared_blks_hit | bigint | | |
shared_blks_read | bigint | | |
shared_blks_dirtied | bigint | | |
shared_blks_written | bigint | | |
local_blks_hit | bigint | | |
local_blks_read | bigint | | |
local_blks_dirtied | bigint | | |
local_blks_written | bigint | | |
temp_blks_read | bigint | | |
temp_blks_written | bigint | | |
shared_blk_read_time | double precision | | |
shared_blk_write_time | double precision | | |
local_blk_read_time | double precision | | |
local_blk_write_time | double precision | | |
temp_blk_read_time | double precision | | |
temp_blk_write_time | double precision | | |
wal_records | bigint | | |
wal_fpi | bigint | | |
wal_bytes | numeric | | |
wal_buffers_full | bigint | | |
jit_functions | bigint | | |
jit_generation_time | double precision | | |
jit_inlining_count | bigint | | |
jit_inlining_time | double precision | | |
jit_optimization_count | bigint | | |
jit_optimization_time | double precision | | |
jit_emission_count | bigint | | |
jit_emission_time | double precision | | |
jit_deform_count | bigint | | |
jit_deform_time | double precision | | |
parallel_workers_to_launch | bigint | | |
parallel_workers_launched | bigint | | |
generic_plan_calls | bigint | | |
custom_plan_calls | bigint | | |
stats_since | timestamp with time zone | | |
minmax_stats_since | timestamp with time zone | | |
SELECT count(*) > 0 AS has_data FROM pg_stat_statements;
has_data
----------
t
(1 row)
DROP EXTENSION pg_stat_statements;

View File

@ -1,224 +0,0 @@
--
-- Tests with plan cache
--
-- Setup
CREATE OR REPLACE FUNCTION select_one_func(int) RETURNS VOID AS $$
DECLARE
ret INT;
BEGIN
SELECT $1 INTO ret;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE select_one_proc(int) AS $$
DECLARE
ret INT;
BEGIN
SELECT $1 INTO ret;
END;
$$ LANGUAGE plpgsql;
-- Prepared statements
SELECT pg_stat_statements_reset() IS NOT NULL AS t;
t
---
t
(1 row)
PREPARE p1 AS SELECT $1 AS a;
SET plan_cache_mode TO force_generic_plan;
EXECUTE p1(1);
a
---
1
(1 row)
SET plan_cache_mode TO force_custom_plan;
EXECUTE p1(1);
a
---
1
(1 row)
SELECT calls, generic_plan_calls, custom_plan_calls, query FROM pg_stat_statements
ORDER BY query COLLATE "C";
calls | generic_plan_calls | custom_plan_calls | query
-------+--------------------+-------------------+----------------------------------------------------
2 | 1 | 1 | PREPARE p1 AS SELECT $1 AS a
1 | 0 | 0 | SELECT pg_stat_statements_reset() IS NOT NULL AS t
2 | 0 | 0 | SET plan_cache_mode TO $1
(3 rows)
DEALLOCATE p1;
-- Extended query protocol
SELECT pg_stat_statements_reset() IS NOT NULL AS t;
t
---
t
(1 row)
SELECT $1 AS a \parse p1
SET plan_cache_mode TO force_generic_plan;
\bind_named p1 1
;
a
---
1
(1 row)
SET plan_cache_mode TO force_custom_plan;
\bind_named p1 1
;
a
---
1
(1 row)
SELECT calls, generic_plan_calls, custom_plan_calls, query FROM pg_stat_statements
ORDER BY query COLLATE "C";
calls | generic_plan_calls | custom_plan_calls | query
-------+--------------------+-------------------+----------------------------------------------------
2 | 1 | 1 | SELECT $1 AS a
1 | 0 | 0 | SELECT pg_stat_statements_reset() IS NOT NULL AS t
2 | 0 | 0 | SET plan_cache_mode TO $1
(3 rows)
\close_prepared p1
-- EXPLAIN [ANALYZE] EXECUTE
SET pg_stat_statements.track = 'all';
SELECT pg_stat_statements_reset() IS NOT NULL AS t;
t
---
t
(1 row)
PREPARE p1 AS SELECT $1;
SET plan_cache_mode TO force_generic_plan;
EXPLAIN (COSTS OFF) EXECUTE p1(1);
QUERY PLAN
------------
Result
(1 row)
EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF) EXECUTE p1(1);
QUERY PLAN
-----------------------------------
Result (actual rows=1.00 loops=1)
(1 row)
SET plan_cache_mode TO force_custom_plan;
EXPLAIN (COSTS OFF) EXECUTE p1(1);
QUERY PLAN
------------
Result
(1 row)
EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF) EXECUTE p1(1);
QUERY PLAN
-----------------------------------
Result (actual rows=1.00 loops=1)
(1 row)
SELECT calls, generic_plan_calls, custom_plan_calls, toplevel, query FROM pg_stat_statements
ORDER BY query COLLATE "C";
calls | generic_plan_calls | custom_plan_calls | toplevel | query
-------+--------------------+-------------------+----------+----------------------------------------------------------------------------------
2 | 0 | 0 | t | EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF) EXECUTE p1(1)
2 | 0 | 0 | t | EXPLAIN (COSTS OFF) EXECUTE p1(1)
4 | 2 | 2 | f | PREPARE p1 AS SELECT $1
1 | 0 | 0 | t | SELECT pg_stat_statements_reset() IS NOT NULL AS t
2 | 0 | 0 | t | SET plan_cache_mode TO $1
(5 rows)
RESET pg_stat_statements.track;
DEALLOCATE p1;
-- Functions/procedures
SET pg_stat_statements.track = 'all';
SELECT pg_stat_statements_reset() IS NOT NULL AS t;
t
---
t
(1 row)
SET plan_cache_mode TO force_generic_plan;
SELECT select_one_func(1);
select_one_func
-----------------
(1 row)
CALL select_one_proc(1);
SET plan_cache_mode TO force_custom_plan;
SELECT select_one_func(1);
select_one_func
-----------------
(1 row)
CALL select_one_proc(1);
SELECT calls, generic_plan_calls, custom_plan_calls, toplevel, query FROM pg_stat_statements
ORDER BY query COLLATE "C";
calls | generic_plan_calls | custom_plan_calls | toplevel | query
-------+--------------------+-------------------+----------+----------------------------------------------------
2 | 0 | 0 | t | CALL select_one_proc($1)
4 | 2 | 2 | f | SELECT $1
1 | 0 | 0 | t | SELECT pg_stat_statements_reset() IS NOT NULL AS t
2 | 0 | 0 | t | SELECT select_one_func($1)
2 | 0 | 0 | t | SET plan_cache_mode TO $1
(5 rows)
--
-- EXPLAIN [ANALYZE] EXECUTE + functions/procedures
--
SET pg_stat_statements.track = 'all';
SELECT pg_stat_statements_reset() IS NOT NULL AS t;
t
---
t
(1 row)
SET plan_cache_mode TO force_generic_plan;
EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF) SELECT select_one_func(1);
QUERY PLAN
-----------------------------------
Result (actual rows=1.00 loops=1)
(1 row)
EXPLAIN (COSTS OFF) SELECT select_one_func(1);
QUERY PLAN
------------
Result
(1 row)
CALL select_one_proc(1);
SET plan_cache_mode TO force_custom_plan;
EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF) SELECT select_one_func(1);
QUERY PLAN
-----------------------------------
Result (actual rows=1.00 loops=1)
(1 row)
EXPLAIN (COSTS OFF) SELECT select_one_func(1);
QUERY PLAN
------------
Result
(1 row)
CALL select_one_proc(1);
SELECT calls, generic_plan_calls, custom_plan_calls, toplevel, query FROM pg_stat_statements
ORDER BY query COLLATE "C", toplevel;
calls | generic_plan_calls | custom_plan_calls | toplevel | query
-------+--------------------+-------------------+----------+------------------------------------------------------------------------------------------------
2 | 0 | 0 | t | CALL select_one_proc($1)
2 | 0 | 0 | t | EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF) SELECT select_one_func($1)
4 | 0 | 0 | f | EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF) SELECT select_one_func($1);
2 | 0 | 0 | t | EXPLAIN (COSTS OFF) SELECT select_one_func($1)
4 | 2 | 2 | f | SELECT $1
1 | 0 | 0 | t | SELECT pg_stat_statements_reset() IS NOT NULL AS t
2 | 0 | 0 | t | SET plan_cache_mode TO $1
(7 rows)
RESET pg_stat_statements.track;
--
-- Cleanup
--
DROP FUNCTION select_one_func(int);
DROP PROCEDURE select_one_proc(int);

View File

@ -702,7 +702,7 @@ SELECT calls, rows, query FROM pg_stat_statements ORDER BY query COLLATE "C";
1 | 13 | CREATE MATERIALIZED VIEW pgss_matv AS SELECT * FROM pgss_ctas
1 | 10 | CREATE TABLE pgss_ctas AS SELECT a, $1 b FROM generate_series($2, $3) a
1 | 0 | DECLARE pgss_cursor CURSOR FOR SELECT * FROM pgss_matv
1 | 5 | FETCH FORWARD $1 pgss_cursor
1 | 5 | FETCH FORWARD 5 pgss_cursor
1 | 7 | FETCH FORWARD ALL pgss_cursor
1 | 1 | FETCH NEXT pgss_cursor
1 | 13 | REFRESH MATERIALIZED VIEW pgss_matv

View File

@ -21,7 +21,6 @@ contrib_targets += pg_stat_statements
install_data(
'pg_stat_statements.control',
'pg_stat_statements--1.4.sql',
'pg_stat_statements--1.12--1.13.sql',
'pg_stat_statements--1.11--1.12.sql',
'pg_stat_statements--1.10--1.11.sql',
'pg_stat_statements--1.9--1.10.sql',
@ -55,7 +54,6 @@ tests += {
'privileges',
'extended',
'parallel',
'plancache',
'cleanup',
'oldextversions',
'squashing',

View File

@ -1,78 +0,0 @@
/* contrib/pg_stat_statements/pg_stat_statements--1.12--1.13.sql */
-- complain if script is sourced in psql, rather than via ALTER EXTENSION
\echo Use "ALTER EXTENSION pg_stat_statements UPDATE TO '1.13'" to load this file. \quit
/* First we have to remove them from the extension */
ALTER EXTENSION pg_stat_statements DROP VIEW pg_stat_statements;
ALTER EXTENSION pg_stat_statements DROP FUNCTION pg_stat_statements(boolean);
/* Then we can drop them */
DROP VIEW pg_stat_statements;
DROP FUNCTION pg_stat_statements(boolean);
/* Now redefine */
CREATE FUNCTION pg_stat_statements(IN showtext boolean,
OUT userid oid,
OUT dbid oid,
OUT toplevel bool,
OUT queryid bigint,
OUT query text,
OUT plans int8,
OUT total_plan_time float8,
OUT min_plan_time float8,
OUT max_plan_time float8,
OUT mean_plan_time float8,
OUT stddev_plan_time float8,
OUT calls int8,
OUT total_exec_time float8,
OUT min_exec_time float8,
OUT max_exec_time float8,
OUT mean_exec_time float8,
OUT stddev_exec_time float8,
OUT rows int8,
OUT shared_blks_hit int8,
OUT shared_blks_read int8,
OUT shared_blks_dirtied int8,
OUT shared_blks_written int8,
OUT local_blks_hit int8,
OUT local_blks_read int8,
OUT local_blks_dirtied int8,
OUT local_blks_written int8,
OUT temp_blks_read int8,
OUT temp_blks_written int8,
OUT shared_blk_read_time float8,
OUT shared_blk_write_time float8,
OUT local_blk_read_time float8,
OUT local_blk_write_time float8,
OUT temp_blk_read_time float8,
OUT temp_blk_write_time float8,
OUT wal_records int8,
OUT wal_fpi int8,
OUT wal_bytes numeric,
OUT wal_buffers_full int8,
OUT jit_functions int8,
OUT jit_generation_time float8,
OUT jit_inlining_count int8,
OUT jit_inlining_time float8,
OUT jit_optimization_count int8,
OUT jit_optimization_time float8,
OUT jit_emission_count int8,
OUT jit_emission_time float8,
OUT jit_deform_count int8,
OUT jit_deform_time float8,
OUT parallel_workers_to_launch int8,
OUT parallel_workers_launched int8,
OUT generic_plan_calls int8,
OUT custom_plan_calls int8,
OUT stats_since timestamp with time zone,
OUT minmax_stats_since timestamp with time zone
)
RETURNS SETOF record
AS 'MODULE_PATHNAME', 'pg_stat_statements_1_13'
LANGUAGE C STRICT VOLATILE PARALLEL SAFE;
CREATE VIEW pg_stat_statements AS
SELECT * FROM pg_stat_statements(true);
GRANT SELECT ON pg_stat_statements TO PUBLIC;

View File

@ -85,7 +85,7 @@ PG_MODULE_MAGIC_EXT(
#define PGSS_TEXT_FILE PG_STAT_TMP_DIR "/pgss_query_texts.stat"
/* Magic number identifying the stats file format */
static const uint32 PGSS_FILE_HEADER = 0x20250731;
static const uint32 PGSS_FILE_HEADER = 0x20220408;
/* PostgreSQL major version number, changes in which invalidate all entries */
static const uint32 PGSS_PG_MAJOR_VERSION = PG_VERSION_NUM / 100;
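
PGSS_FILE_HEADER is a date-styled magic number written at the front of the on-disk stats file; changing it, as above, makes files written under the other format fail validation and be discarded at load time. A rough sketch of such a header check, with hypothetical names:

#include "postgres.h"
#include <stdio.h>

static const uint32 DEMO_FILE_HEADER = 0x20250731;
static const uint32 DEMO_PG_MAJOR_VERSION = PG_VERSION_NUM / 100;

/* Return true only if the file starts with the expected magic + version. */
static bool
demo_header_ok(FILE *file)
{
	uint32		header;
	uint32		pgver;

	if (fread(&header, sizeof(uint32), 1, file) != 1 ||
		fread(&pgver, sizeof(uint32), 1, file) != 1)
		return false;
	return header == DEMO_FILE_HEADER && pgver == DEMO_PG_MAJOR_VERSION;
}
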
@ -114,7 +114,6 @@ typedef enum pgssVersion
PGSS_V1_10,
PGSS_V1_11,
PGSS_V1_12,
PGSS_V1_13,
} pgssVersion;
typedef enum pgssStoreKind
@ -139,6 +138,7 @@ typedef enum pgssStoreKind
* If you add a new key to this struct, make sure to teach pgss_store() to
* zero the padding bytes. Otherwise, things will break, because pgss_hash is
* created using HASH_BLOBS, and thus tag_hash is used to hash this.
*/
typedef struct pgssHashKey
{
@ -210,8 +210,6 @@ typedef struct Counters
* to be launched */
int64 parallel_workers_launched; /* # of parallel workers actually
* launched */
int64 generic_plan_calls; /* number of calls using a generic plan */
int64 custom_plan_calls; /* number of calls using a custom plan */
} Counters;
/*
@ -325,7 +323,6 @@ PG_FUNCTION_INFO_V1(pg_stat_statements_1_9);
PG_FUNCTION_INFO_V1(pg_stat_statements_1_10);
PG_FUNCTION_INFO_V1(pg_stat_statements_1_11);
PG_FUNCTION_INFO_V1(pg_stat_statements_1_12);
PG_FUNCTION_INFO_V1(pg_stat_statements_1_13);
PG_FUNCTION_INFO_V1(pg_stat_statements);
PG_FUNCTION_INFO_V1(pg_stat_statements_info);
@ -358,8 +355,7 @@ static void pgss_store(const char *query, int64 queryId,
const struct JitInstrumentation *jitusage,
JumbleState *jstate,
int parallel_workers_to_launch,
int parallel_workers_launched,
PlannedStmtOrigin planOrigin);
int parallel_workers_launched);
static void pg_stat_statements_internal(FunctionCallInfo fcinfo,
pgssVersion api_version,
bool showtext);
@ -881,8 +877,7 @@ pgss_post_parse_analyze(ParseState *pstate, Query *query, JumbleState *jstate)
NULL,
jstate,
0,
0,
PLAN_STMT_UNKNOWN);
0);
}
/*
@ -962,8 +957,7 @@ pgss_planner(Query *parse,
NULL,
NULL,
0,
0,
result->planOrigin);
0);
}
else
{
@ -1097,8 +1091,7 @@ pgss_ExecutorEnd(QueryDesc *queryDesc)
queryDesc->estate->es_jit ? &queryDesc->estate->es_jit->instr : NULL,
NULL,
queryDesc->estate->es_parallel_workers_to_launch,
queryDesc->estate->es_parallel_workers_launched,
queryDesc->plannedstmt->planOrigin);
queryDesc->estate->es_parallel_workers_launched);
}
if (prev_ExecutorEnd)
@ -1231,8 +1224,7 @@ pgss_ProcessUtility(PlannedStmt *pstmt, const char *queryString,
NULL,
NULL,
0,
0,
pstmt->planOrigin);
0);
}
else
{
@ -1295,8 +1287,7 @@ pgss_store(const char *query, int64 queryId,
const struct JitInstrumentation *jitusage,
JumbleState *jstate,
int parallel_workers_to_launch,
int parallel_workers_launched,
PlannedStmtOrigin planOrigin)
int parallel_workers_launched)
{
pgssHashKey key;
pgssEntry *entry;
@ -1504,12 +1495,6 @@ pgss_store(const char *query, int64 queryId,
entry->counters.parallel_workers_to_launch += parallel_workers_to_launch;
entry->counters.parallel_workers_launched += parallel_workers_launched;
/* plan cache counters */
if (planOrigin == PLAN_STMT_CACHE_GENERIC)
entry->counters.generic_plan_calls++;
else if (planOrigin == PLAN_STMT_CACHE_CUSTOM)
entry->counters.custom_plan_calls++;
SpinLockRelease(&entry->mutex);
}
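
The per-entry counters, including the generic/custom plan-call pair being added or removed here, are updated under the entry's spinlock because concurrent backends can store stats for the same entry. A condensed sketch of the idiom:

#include "postgres.h"
#include "storage/spin.h"

typedef struct DemoEntry
{
	slock_t		mutex;
	int64		generic_plan_calls;
	int64		custom_plan_calls;
} DemoEntry;

static void
demo_bump(DemoEntry *entry, bool generic)
{
	/* Spinlocks protect only brief, straight-line updates like this. */
	SpinLockAcquire(&entry->mutex);
	if (generic)
		entry->generic_plan_calls++;
	else
		entry->custom_plan_calls++;
	SpinLockRelease(&entry->mutex);
}
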
@ -1577,8 +1562,7 @@ pg_stat_statements_reset(PG_FUNCTION_ARGS)
#define PG_STAT_STATEMENTS_COLS_V1_10 43
#define PG_STAT_STATEMENTS_COLS_V1_11 49
#define PG_STAT_STATEMENTS_COLS_V1_12 52
#define PG_STAT_STATEMENTS_COLS_V1_13 54
#define PG_STAT_STATEMENTS_COLS 54 /* maximum of above */
#define PG_STAT_STATEMENTS_COLS 52 /* maximum of above */
/*
* Retrieve statement statistics.
@ -1590,16 +1574,6 @@ pg_stat_statements_reset(PG_FUNCTION_ARGS)
* expected API version is identified by embedding it in the C name of the
* function. Unfortunately we weren't bright enough to do that for 1.1.
*/
Datum
pg_stat_statements_1_13(PG_FUNCTION_ARGS)
{
bool showtext = PG_GETARG_BOOL(0);
pg_stat_statements_internal(fcinfo, PGSS_V1_13, showtext);
return (Datum) 0;
}
Datum
pg_stat_statements_1_12(PG_FUNCTION_ARGS)
{
@ -1758,10 +1732,6 @@ pg_stat_statements_internal(FunctionCallInfo fcinfo,
if (api_version != PGSS_V1_12)
elog(ERROR, "incorrect number of output arguments");
break;
case PG_STAT_STATEMENTS_COLS_V1_13:
if (api_version != PGSS_V1_13)
elog(ERROR, "incorrect number of output arguments");
break;
default:
elog(ERROR, "incorrect number of output arguments");
}
@ -2014,11 +1984,6 @@ pg_stat_statements_internal(FunctionCallInfo fcinfo,
values[i++] = Int64GetDatumFast(tmp.parallel_workers_to_launch);
values[i++] = Int64GetDatumFast(tmp.parallel_workers_launched);
}
if (api_version >= PGSS_V1_13)
{
values[i++] = Int64GetDatumFast(tmp.generic_plan_calls);
values[i++] = Int64GetDatumFast(tmp.custom_plan_calls);
}
if (api_version >= PGSS_V1_11)
{
values[i++] = TimestampTzGetDatum(stats_since);
@ -2034,7 +1999,6 @@ pg_stat_statements_internal(FunctionCallInfo fcinfo,
api_version == PGSS_V1_10 ? PG_STAT_STATEMENTS_COLS_V1_10 :
api_version == PGSS_V1_11 ? PG_STAT_STATEMENTS_COLS_V1_11 :
api_version == PGSS_V1_12 ? PG_STAT_STATEMENTS_COLS_V1_12 :
api_version == PGSS_V1_13 ? PG_STAT_STATEMENTS_COLS_V1_13 :
-1 /* fail if you forget to update this assert */ ));
tuplestore_putvalues(rsinfo->setResult, rsinfo->setDesc, values, nulls);
@ -2712,8 +2676,8 @@ entry_reset(Oid userid, Oid dbid, int64 queryid, bool minmax_only)
HASH_SEQ_STATUS hash_seq;
pgssEntry *entry;
FILE *qfile;
int64 num_entries;
int64 num_remove = 0;
long num_entries;
long num_remove = 0;
pgssHashKey key;
TimestampTz stats_reset;

View File

@ -1,5 +1,5 @@
# pg_stat_statements extension
comment = 'track planning and execution statistics of all SQL statements executed'
default_version = '1.13'
default_version = '1.12'
module_pathname = '$libdir/pg_stat_statements'
relocatable = true

View File

@ -28,46 +28,3 @@ COMMIT;
SELECT calls, rows, query FROM pg_stat_statements ORDER BY query COLLATE "C";
SELECT pg_stat_statements_reset() IS NOT NULL AS t;
-- Normalization of FETCH statements
BEGIN;
DECLARE pgss_cursor CURSOR FOR SELECT FROM generate_series(1, 10);
-- implicit directions
FETCH pgss_cursor;
FETCH 1 pgss_cursor;
FETCH 2 pgss_cursor;
FETCH -1 pgss_cursor;
-- explicit NEXT
FETCH NEXT pgss_cursor;
-- explicit PRIOR
FETCH PRIOR pgss_cursor;
-- explicit FIRST
FETCH FIRST pgss_cursor;
-- explicit LAST
FETCH LAST pgss_cursor;
-- explicit ABSOLUTE
FETCH ABSOLUTE 1 pgss_cursor;
FETCH ABSOLUTE 2 pgss_cursor;
FETCH ABSOLUTE -1 pgss_cursor;
-- explicit RELATIVE
FETCH RELATIVE 1 pgss_cursor;
FETCH RELATIVE 2 pgss_cursor;
FETCH RELATIVE -1 pgss_cursor;
-- explicit FORWARD
FETCH ALL pgss_cursor;
-- explicit FORWARD ALL
FETCH FORWARD ALL pgss_cursor;
-- explicit FETCH FORWARD
FETCH FORWARD pgss_cursor;
FETCH FORWARD 1 pgss_cursor;
FETCH FORWARD 2 pgss_cursor;
FETCH FORWARD -1 pgss_cursor;
-- explicit FETCH BACKWARD
FETCH BACKWARD pgss_cursor;
FETCH BACKWARD 1 pgss_cursor;
FETCH BACKWARD 2 pgss_cursor;
FETCH BACKWARD -1 pgss_cursor;
-- explicit BACKWARD ALL
FETCH BACKWARD ALL pgss_cursor;
COMMIT;
SELECT calls, query FROM pg_stat_statements ORDER BY query COLLATE "C";

Some files were not shown because too many files have changed in this diff.