Compare commits

...

10 Commits

Author SHA1 Message Date
Amit Kapila
d9e225f275 Change the LOG level in 040_standby_failover_slots_sync.pl to DEBUG2.
Temporarily change the log level of 040_standby_failover_slots_sync.pl to
DEBUG2. This is to get more information about BF failures. We will reset
it back to default once the tests are stabilized.

Author: Hou Zhijie
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
Discussion: https://postgr.es/m/OS0PR01MB571633C23B2A4CAC5FB0371A944C2@OS0PR01MB5716.jpnprd01.prod.outlook.com
2024-02-16 10:13:51 +05:30
Amit Kapila
7a424ece48 Add more LOG and DEBUG messages for slot synchronization.
This provides more information about remote slots during synchronization
which helps in debugging bugs and BF failures due to test case issues. We
might later want to change the LOG message added by this patch to DEBUG1.

Author: Hou Zhijie
Reviewed-by: Amit Kapila, Bertrand Drouvot
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
Discussion: https://postgr.es/m/OS0PR01MB571633C23B2A4CAC5FB0371A944C2@OS0PR01MB5716.jpnprd01.prod.outlook.com
2024-02-16 09:02:54 +05:30
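A minimal sketch of where these messages surface (assumptions: a standby already configured for failover slot synchronization, as in the TAP test further down): invoking the synchronization function named in the commits below runs a sync cycle on the standby, during which the new LOG/DEBUG lines describe each remote slot as it is processed.

-- run on the standby; see the slotsync.c hunk further down for the exact
-- LOG text emitted when a remote slot cannot yet be synced
SELECT pg_sync_replication_slots();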
David Rowley
1fe66680c0 Attempt to stabilize flapping regression test
Per buildfarm animal mylodon, the plan for this test was sometimes
swapping the join order for tenk1 and tenk2.  Given that add_path() has
no code that would cause this fluctuation when given paths with consistent
costs, this indicates that the costs must be fluctuating in some runs.
The only proven cause I've seen for that is slight variations in
pg_class.relpages for some tables.  This was demonstrated by f03a9ca43 and
the related discussion.  Manually adjusting tenk2's pg_class.relpages by
subtracting just 1 page does cause the plan to change for this test.

Here we've not gone to the same lengths to prove that's what's going on
in this case; proving it does not seem worth the time.  Let's just
shrink one side of the join so that the additional cost of the swapped join
order is sufficiently different that, even if the relpages estimate is off
by a few pages, the planner still shouldn't swap the join order.

Reported-by: Thomas Munro
Author: Andy Fan, David Rowley
Discussion: https://postgr.es/m/CA+hUKGLqC-NobKYfjxNM3Gexv9OJ-Fhvy9bugUcXsZjTqH7W=Q@mail.gmail.com
2024-02-16 15:01:29 +13:00
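For reference, the adjusted query as it appears in the regression test hunks further down; the extra clause on tenk2 shrinks that side of the join enough that small relpages variations can no longer flip the join order:

explain (costs off)
select t1.unique1 from tenk1 t1
inner join tenk2 t2 on t1.tenthous = t2.tenthous and t2.thousand = 0
union all
(values(1)) limit 1;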
Alexander Korotkov
bf82f43790 Followup fixes for transaction_timeout
Don't deal with transaction timeout in PostgresMain().  Instead, release
transaction timeout activated by StartTransaction() in
CommitTransaction()/AbortTransaction()/PrepareTransaction().  Deal with both
enabling and disabling transaction timeout in assign_transaction_timeout().

Also, remove the potentially flaky timeouts-long isolation test, which has
no guarantee of passing on slow/busy machines.

Reported-by: Andres Freund
Discussion: https://postgr.es/m/20240215230856.pc6k57tqxt7fhldm%40awork3.anarazel.de
2024-02-16 03:36:38 +02:00
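A minimal sketch of the behavior the new assign hook provides (timings illustrative; the isolation test further down exercises the same pattern in session s6): changing transaction_timeout inside an already-open transaction now arms or disarms the timer immediately, rather than waiting for the next transaction start.

BEGIN;
SET transaction_timeout = '10ms';  -- assign hook enables the timer at once
SELECT pg_sleep(0.1);              -- the session is terminated when the timer fires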
Alexander Korotkov
51efe38cb9 Introduce transaction_timeout
This commit adds a timeout intended to prevent long-running transactions.
Any session whose transaction spans longer than this timeout will be
terminated.

However, this timeout is not applied to prepared transactions.
Only transactions with user connections are affected.

Discussion: https://postgr.es/m/CAAhFRxiQsRs2Eq5kCo9nXE3HTugsAAJdSQSmxncivebAxdmBjQ%40mail.gmail.com
Author: Andrey Borodin <amborodin@acm.org>
Author: Japin Li <japinli@hotmail.com>
Author: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: Nikolay Samokhvalov <samokhvalov@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: bt23nguyent <bt23nguyent@oss.nttdata.com>
Reviewed-by: Yuhang Qiu <iamqyh@gmail.com>
2024-02-15 23:56:12 +02:00
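A minimal usage sketch (values illustrative), matching the documentation and error code added in the hunks below:

SET transaction_timeout = '5s';    -- 0, the default, disables the timeout
BEGIN;
SELECT pg_sleep(10);               -- after 5 seconds the backend exits with
                                   -- FATAL: terminating connection due to transaction timeout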
Tom Lane
5c9f2f9398 Doc: improve a couple of comments in postgresql.conf.sample.
Clarify comments associated with max_parallel_workers and
related settings.

Per bug #18343 from Christopher Kline.

Discussion: https://postgr.es/m/18343-3a5e903d1d3692ab@postgresql.org
2024-02-15 16:45:03 -05:00
Alexander Korotkov
9f13376396 Pull up ANY-SUBLINK with the necessary lateral support.
For ANY-SUBLINK, we adopted a two-stage pull-up approach to handle
different types of scenarios. In the first stage, the sublink is pulled up
as a subquery. When that code was written, we did not yet have the ability
to perform lateral joins, and therefore we were unable to pull up Vars with
varlevelsup = 1. Now that lateral joins are available, we can eliminate
this limitation.

Author: Andy Fan <zhihui.fan1213@gmail.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Richard Guo <guofenglinux@gmail.com>
Reviewed-by: Alena Rybakina <lena.ribackina@yandex.ru>
Reviewed-by: Andrey Lepikhov <a.lepikhov@postgrespro.ru>
2024-02-15 12:06:12 +02:00
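One of the new regression tests (shown in full further down) illustrates the case that can now be pulled up; this correlated ANY sublink used to require a per-row SubPlan and is now planned as a join:

explain (costs off)
select * from tenk1 A where hundred in (select hundred from tenk2 B where B.odd = A.odd);
-- plan: Hash Join with Hash Cond: ((a.odd = b.odd) AND (a.hundred = b.hundred))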
Peter Eisentraut
995d400cec Allow passing extra options to initdb for tests
Setting the environment variable PG_TEST_INITDB_EXTRA_OPTS passes
extra options to initdb run by pg_regress or
PostgreSQL::Test::Cluster's init.

This can be useful for a wide variety of uses, like running all tests
with checksums enabled, or with JIT enabled, or with different GUC
settings, or with different locale settings.  (Not all tests are going
to pass with arbitrary options, but it is useful to run this against
specific test suites.)

Reviewed-by: Ian Lawrence Barwick <barwick@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/d4d2ad9f-1c1d-47a1-bb4d-c10a747d4f15%40eisentraut.org
2024-02-15 10:29:18 +01:00
Amit Kapila
9bc1eee988 Another try to fix BF failure introduced in commit ddd5f4f54a.
Before attempting to sync the slot on the standby via
pg_sync_replication_slots(), ensure that the slot's restart_lsn on the
primary has moved to a recent WAL position, by re-creating the
subscription and the logical slot.

Author: Hou Zhijie
Discussion: https://postgr.es/m/CAA4eK1+d5Lne8vCAn0un4SP9x-ZBr2-xfxg01uSfeBTSCKFZoQ@mail.gmail.com
2024-02-15 10:37:28 +05:30
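The test change boils down to re-creating the subscription and its slot (a sketch; the connection string is elided here, and the names are taken from the TAP test hunk further down):

DROP SUBSCRIPTION regress_mysub1;
CREATE SUBSCRIPTION regress_mysub1
    CONNECTION '...'   -- publisher connection string
    PUBLICATION regress_mypub
    WITH (slot_name = lsub1_slot, copy_data = false, failover = true);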
David Rowley
0c444a70f2 Simplify PathKey checking code
pathkeys_useful_for_ordering() contained some needless checks to return
0 when either root->query_pathkeys or pathkeys lists were empty.  This is
already handled by pathkeys_count_contained_in(), so let's have it do the
work instead of having redundant checks.

Similarly, in pathkeys_useful_for_grouping(), checking whether pathkeys is
an empty list just before looping over it isn't required.  Technically,
neither is the empty-list check for group_pathkeys, but I felt a bit
more work would have to be done to get the equivalent behavior if we'd
left it up to the foreach loop to call list_member_ptr().

This was noticed by Andy while he was reviewing a patch to improve the
UNION planner.  Since that patch adds another function similar to
pathkeys_useful_for_ordering() and since I wasn't planning to copy these
redundant checks over to the new function, let's adjust the existing
code so that both functions will be consistent.

Author: Andy Fan
Discussion: https://postgr.es/m/87o7cti48f.fsf@163.com
2024-02-15 18:01:28 +13:00
36 changed files with 583 additions and 59 deletions

View File

@ -11894,7 +11894,7 @@ CREATE FOREIGN TABLE foreign_tbl (b int)
CREATE FOREIGN TABLE foreign_tbl2 () INHERITS (foreign_tbl)
SERVER loopback OPTIONS (table_name 'base_tbl');
EXPLAIN (VERBOSE, COSTS OFF)
SELECT a FROM base_tbl WHERE a IN (SELECT a FROM foreign_tbl);
SELECT a FROM base_tbl WHERE (a, random() > 0) IN (SELECT a, random() > 0 FROM foreign_tbl);
QUERY PLAN
-----------------------------------------------------------------------------
Seq Scan on public.base_tbl
@ -11902,7 +11902,7 @@ SELECT a FROM base_tbl WHERE a IN (SELECT a FROM foreign_tbl);
Filter: (SubPlan 1)
SubPlan 1
-> Result
Output: base_tbl.a
Output: base_tbl.a, (random() > '0'::double precision)
-> Append
-> Async Foreign Scan on public.foreign_tbl foreign_tbl_1
Remote SQL: SELECT NULL FROM public.base_tbl
@ -11910,7 +11910,7 @@ SELECT a FROM base_tbl WHERE a IN (SELECT a FROM foreign_tbl);
Remote SQL: SELECT NULL FROM public.base_tbl
(11 rows)
SELECT a FROM base_tbl WHERE a IN (SELECT a FROM foreign_tbl);
SELECT a FROM base_tbl WHERE (a, random() > 0) IN (SELECT a, random() > 0 FROM foreign_tbl);
a
---
1

View File

@ -3988,8 +3988,8 @@ CREATE FOREIGN TABLE foreign_tbl2 () INHERITS (foreign_tbl)
SERVER loopback OPTIONS (table_name 'base_tbl');
EXPLAIN (VERBOSE, COSTS OFF)
SELECT a FROM base_tbl WHERE a IN (SELECT a FROM foreign_tbl);
SELECT a FROM base_tbl WHERE a IN (SELECT a FROM foreign_tbl);
SELECT a FROM base_tbl WHERE (a, random() > 0) IN (SELECT a, random() > 0 FROM foreign_tbl);
SELECT a FROM base_tbl WHERE (a, random() > 0) IN (SELECT a, random() > 0 FROM foreign_tbl);
-- Clean up
DROP FOREIGN TABLE foreign_tbl CASCADE;

View File

@ -9140,6 +9140,42 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;
</listitem>
</varlistentry>
<varlistentry id="guc-transaction-timeout" xreflabel="transaction_timeout">
<term><varname>transaction_timeout</varname> (<type>integer</type>)
<indexterm>
<primary><varname>transaction_timeout</varname> configuration parameter</primary>
</indexterm>
</term>
<listitem>
<para>
Terminate any session that remains in a transaction longer than the
specified amount of time. The limit applies both to explicit transactions
(started with <command>BEGIN</command>) and to an implicitly started
transaction corresponding to a single statement.
If this value is specified without units, it is taken as milliseconds.
A value of zero (the default) disables the timeout.
</para>
<para>
If <varname>transaction_timeout</varname> is shorter than or equal to
<varname>idle_in_transaction_session_timeout</varname> or <varname>statement_timeout</varname>,
<varname>transaction_timeout</varname> invalidates the longer timeout.
</para>
<para>
Setting <varname>transaction_timeout</varname> in
<filename>postgresql.conf</filename> is not recommended because it would
affect all sessions.
</para>
<note>
<para>
Prepared transactions are not subject to this timeout.
</para>
</note>
</listitem>
</varlistentry>
<varlistentry id="guc-lock-timeout" xreflabel="lock_timeout">
<term><varname>lock_timeout</varname> (<type>integer</type>)
<indexterm>

View File

@ -390,12 +390,37 @@ make check LANG=C ENCODING=EUC_JP
<title>Custom Server Settings</title>
<para>
Custom server settings to use when running a regression test suite can be
There are several ways to use custom server settings when running a test
suite. This can be useful to enable additional logging, adjust resource
limits, or enable extra run-time checks such as <xref
linkend="guc-debug-discard-caches"/>. But note that not all tests can be
expected to pass cleanly with arbitrary settings.
</para>
<para>
Extra options can be passed to the various <command>initdb</command>
commands that are run internally during test setup using the environment
variable <envar>PG_TEST_INITDB_EXTRA_OPTS</envar>. For example, to run a
test with checksums enabled and a custom WAL segment size and
<varname>work_mem</varname> setting, use:
<screen>
make check PG_TEST_INITDB_EXTRA_OPTS='-k --wal-segsize=4 -c work_mem=50MB'
</screen>
</para>
<para>
For the core regression test suite and other tests driven by
<command>pg_regress</command>, custom run-time server settings can also be
set in the <varname>PGOPTIONS</varname> environment variable (for settings
that allow this):
that allow this), for example:
<screen>
make check PGOPTIONS="-c debug_parallel_query=regress -c work_mem=50MB"
</screen>
(This makes use of functionality provided by libpq; see <xref
linkend="libpq-connect-options"/> for details.)
</para>
<para>
When running against a temporary installation, custom settings can also be
set by supplying a pre-written <filename>postgresql.conf</filename>:
<screen>
@ -405,11 +430,6 @@ make check EXTRA_REGRESS_OPTS="--temp-config=test_postgresql.conf"
</screen>
</para>
<para>
This can be useful to enable additional logging, adjust resource limits,
or enable extra run-time checks such as <xref
linkend="guc-debug-discard-caches"/>.
</para>
</sect2>
<sect2 id="regress-run-extra-tests">

View File

@ -2139,6 +2139,10 @@ StartTransaction(void)
*/
s->state = TRANS_INPROGRESS;
/* Schedule transaction timeout */
if (TransactionTimeout > 0)
enable_timeout_after(TRANSACTION_TIMEOUT, TransactionTimeout);
ShowTransactionState("StartTransaction");
}
@ -2258,6 +2262,10 @@ CommitTransaction(void)
s->state = TRANS_COMMIT;
s->parallelModeLevel = 0;
/* Disable transaction timeout */
if (TransactionTimeout > 0)
disable_timeout(TRANSACTION_TIMEOUT, false);
if (!is_parallel_worker)
{
/*
@ -2531,6 +2539,10 @@ PrepareTransaction(void)
*/
s->state = TRANS_PREPARE;
/* Disable transaction timeout */
if (TransactionTimeout > 0)
disable_timeout(TRANSACTION_TIMEOUT, false);
prepared_at = GetCurrentTimestamp();
/*
@ -2703,6 +2715,10 @@ AbortTransaction(void)
/* Prevent cancel/die interrupt while cleaning up */
HOLD_INTERRUPTS();
/* Disable transaction timeout */
if (TransactionTimeout > 0)
disable_timeout(TRANSACTION_TIMEOUT, false);
/* Make sure we have a valid memory context and resource owner */
AtAbort_Memory();
AtAbort_ResourceOwner();

View File

@ -2143,12 +2143,6 @@ pathkeys_useful_for_ordering(PlannerInfo *root, List *pathkeys)
{
int n_common_pathkeys;
if (root->query_pathkeys == NIL)
return 0; /* no special ordering requested */
if (pathkeys == NIL)
return 0; /* unordered path */
(void) pathkeys_count_contained_in(root->query_pathkeys, pathkeys,
&n_common_pathkeys);
@ -2184,10 +2178,6 @@ pathkeys_useful_for_grouping(PlannerInfo *root, List *pathkeys)
if (root->group_pathkeys == NIL)
return 0;
/* unordered path */
if (pathkeys == NIL)
return 0;
/* walk the pathkeys and search for matching group key */
foreach(key, pathkeys)
{

View File

@ -1278,14 +1278,23 @@ convert_ANY_sublink_to_join(PlannerInfo *root, SubLink *sublink,
List *subquery_vars;
Node *quals;
ParseState *pstate;
Relids sub_ref_outer_relids;
bool use_lateral;
Assert(sublink->subLinkType == ANY_SUBLINK);
/*
* The sub-select must not refer to any Vars of the parent query. (Vars of
* higher levels should be okay, though.)
* If the sub-select refers to any Vars of the parent query, treat it as
* LATERAL. (Vars of higher levels don't matter here.)
*/
if (contain_vars_of_level((Node *) subselect, 1))
sub_ref_outer_relids = pull_varnos_of_level(NULL, (Node *) subselect, 1);
use_lateral = !bms_is_empty(sub_ref_outer_relids);
/*
* Check that sub-select refers nothing outside of available_rels of the
* parent query.
*/
if (!bms_is_subset(sub_ref_outer_relids, available_rels))
return NULL;
/*
@ -1323,7 +1332,7 @@ convert_ANY_sublink_to_join(PlannerInfo *root, SubLink *sublink,
nsitem = addRangeTableEntryForSubquery(pstate,
subselect,
makeAlias("ANY_subquery", NIL),
false,
use_lateral,
false);
rte = nsitem->p_rte;
parse->rtable = lappend(parse->rtable, rte);

View File

@ -586,6 +586,7 @@ AutoVacLauncherMain(int argc, char *argv[])
* regular maintenance from being executed.
*/
SetConfigOption("statement_timeout", "0", PGC_SUSET, PGC_S_OVERRIDE);
SetConfigOption("transaction_timeout", "0", PGC_SUSET, PGC_S_OVERRIDE);
SetConfigOption("lock_timeout", "0", PGC_SUSET, PGC_S_OVERRIDE);
SetConfigOption("idle_in_transaction_session_timeout", "0",
PGC_SUSET, PGC_S_OVERRIDE);
@ -1587,6 +1588,7 @@ AutoVacWorkerMain(int argc, char *argv[])
* regular maintenance from being executed.
*/
SetConfigOption("statement_timeout", "0", PGC_SUSET, PGC_S_OVERRIDE);
SetConfigOption("transaction_timeout", "0", PGC_SUSET, PGC_S_OVERRIDE);
SetConfigOption("lock_timeout", "0", PGC_SUSET, PGC_S_OVERRIDE);
SetConfigOption("idle_in_transaction_session_timeout", "0",
PGC_SUSET, PGC_S_OVERRIDE);

View File

@ -321,6 +321,9 @@ reserve_wal_for_local_slot(XLogRecPtr restart_lsn)
oldest_segno = XLogGetOldestSegno(cur_timeline);
}
elog(DEBUG1, "segno: %ld of purposed restart_lsn for the synced slot, oldest_segno: %ld available",
segno, oldest_segno);
/*
* If all required WAL is still there, great, otherwise retry. The
* slot should prevent further removal of WAL, unless there's a
@ -361,7 +364,18 @@ update_and_persist_local_synced_slot(RemoteSlot *remote_slot, Oid remote_dbid)
* current location when recreating the slot in the next cycle. It may
* take more time to create such a slot. Therefore, we keep this slot
* and attempt the synchronization in the next cycle.
*
* XXX should this be changed to elog(DEBUG1) perhaps?
*/
ereport(LOG,
errmsg("could not sync slot information as remote slot precedes local slot:"
" remote slot \"%s\": LSN (%X/%X), catalog xmin (%u) local slot: LSN (%X/%X), catalog xmin (%u)",
remote_slot->name,
LSN_FORMAT_ARGS(remote_slot->restart_lsn),
remote_slot->catalog_xmin,
LSN_FORMAT_ARGS(slot->data.restart_lsn),
slot->data.catalog_xmin));
return;
}

View File

@ -59,6 +59,7 @@ int DeadlockTimeout = 1000;
int StatementTimeout = 0;
int LockTimeout = 0;
int IdleInTransactionSessionTimeout = 0;
int TransactionTimeout = 0;
int IdleSessionTimeout = 0;
bool log_lock_waits = false;

View File

@ -3418,6 +3418,17 @@ ProcessInterrupts(void)
IdleInTransactionSessionTimeoutPending = false;
}
if (TransactionTimeoutPending)
{
/* As above, ignore the signal if the GUC has been reset to zero. */
if (TransactionTimeout > 0)
ereport(FATAL,
(errcode(ERRCODE_TRANSACTION_TIMEOUT),
errmsg("terminating connection due to transaction timeout")));
else
TransactionTimeoutPending = false;
}
if (IdleSessionTimeoutPending)
{
/* As above, ignore the signal if the GUC has been reset to zero. */
@ -3632,6 +3643,23 @@ check_log_stats(bool *newval, void **extra, GucSource source)
return true;
}
/* GUC assign hook for transaction_timeout */
void
assign_transaction_timeout(int newval, void *extra)
{
if (IsTransactionState())
{
/*
* If the transaction_timeout GUC has been changed within the transaction
* block, enable or disable the timer correspondingly.
*/
if (newval > 0 && !get_timeout_active(TRANSACTION_TIMEOUT))
enable_timeout_after(TRANSACTION_TIMEOUT, newval);
else if (newval <= 0 && get_timeout_active(TRANSACTION_TIMEOUT))
disable_timeout(TRANSACTION_TIMEOUT, false);
}
}
/*
* set_debug_options --- apply "-d N" command line option
@ -4483,7 +4511,8 @@ PostgresMain(const char *dbname, const char *username)
pgstat_report_activity(STATE_IDLEINTRANSACTION_ABORTED, NULL);
/* Start the idle-in-transaction timer */
if (IdleInTransactionSessionTimeout > 0)
if (IdleInTransactionSessionTimeout > 0
&& (IdleInTransactionSessionTimeout < TransactionTimeout || TransactionTimeout == 0))
{
idle_in_transaction_timeout_enabled = true;
enable_timeout_after(IDLE_IN_TRANSACTION_SESSION_TIMEOUT,
@ -4496,7 +4525,8 @@ PostgresMain(const char *dbname, const char *username)
pgstat_report_activity(STATE_IDLEINTRANSACTION, NULL);
/* Start the idle-in-transaction timer */
if (IdleInTransactionSessionTimeout > 0)
if (IdleInTransactionSessionTimeout > 0
&& (IdleInTransactionSessionTimeout < TransactionTimeout || TransactionTimeout == 0))
{
idle_in_transaction_timeout_enabled = true;
enable_timeout_after(IDLE_IN_TRANSACTION_SESSION_TIMEOUT,
@ -5112,7 +5142,8 @@ enable_statement_timeout(void)
/* must be within an xact */
Assert(xact_started);
if (StatementTimeout > 0)
if (StatementTimeout > 0
&& (StatementTimeout < TransactionTimeout || TransactionTimeout == 0))
{
if (!get_timeout_active(STATEMENT_TIMEOUT))
enable_timeout_after(STATEMENT_TIMEOUT, StatementTimeout);

View File

@ -252,6 +252,7 @@ Section: Class 25 - Invalid Transaction State
25P01 E ERRCODE_NO_ACTIVE_SQL_TRANSACTION no_active_sql_transaction
25P02 E ERRCODE_IN_FAILED_SQL_TRANSACTION in_failed_sql_transaction
25P03 E ERRCODE_IDLE_IN_TRANSACTION_SESSION_TIMEOUT idle_in_transaction_session_timeout
25P04 E ERRCODE_TRANSACTION_TIMEOUT transaction_timeout
Section: Class 26 - Invalid SQL Statement Name

View File

@ -33,6 +33,7 @@ volatile sig_atomic_t ProcDiePending = false;
volatile sig_atomic_t CheckClientConnectionPending = false;
volatile sig_atomic_t ClientConnectionLost = false;
volatile sig_atomic_t IdleInTransactionSessionTimeoutPending = false;
volatile sig_atomic_t TransactionTimeoutPending = false;
volatile sig_atomic_t IdleSessionTimeoutPending = false;
volatile sig_atomic_t ProcSignalBarrierPending = false;
volatile sig_atomic_t LogMemoryContextPending = false;

View File

@ -75,6 +75,7 @@ static void ShutdownPostgres(int code, Datum arg);
static void StatementTimeoutHandler(void);
static void LockTimeoutHandler(void);
static void IdleInTransactionSessionTimeoutHandler(void);
static void TransactionTimeoutHandler(void);
static void IdleSessionTimeoutHandler(void);
static void IdleStatsUpdateTimeoutHandler(void);
static void ClientCheckTimeoutHandler(void);
@ -764,6 +765,7 @@ InitPostgres(const char *in_dbname, Oid dboid,
RegisterTimeout(LOCK_TIMEOUT, LockTimeoutHandler);
RegisterTimeout(IDLE_IN_TRANSACTION_SESSION_TIMEOUT,
IdleInTransactionSessionTimeoutHandler);
RegisterTimeout(TRANSACTION_TIMEOUT, TransactionTimeoutHandler);
RegisterTimeout(IDLE_SESSION_TIMEOUT, IdleSessionTimeoutHandler);
RegisterTimeout(CLIENT_CONNECTION_CHECK_TIMEOUT, ClientCheckTimeoutHandler);
RegisterTimeout(IDLE_STATS_UPDATE_TIMEOUT,
@ -1395,6 +1397,14 @@ LockTimeoutHandler(void)
kill(MyProcPid, SIGINT);
}
static void
TransactionTimeoutHandler(void)
{
TransactionTimeoutPending = true;
InterruptPending = true;
SetLatch(MyLatch);
}
static void
IdleInTransactionSessionTimeoutHandler(void)
{

View File

@ -2577,6 +2577,17 @@ struct config_int ConfigureNamesInt[] =
NULL, NULL, NULL
},
{
{"transaction_timeout", PGC_USERSET, CLIENT_CONN_STATEMENT,
gettext_noop("Sets the maximum allowed time in a transaction with a session (not a prepared transaction)."),
gettext_noop("A value of 0 turns off the timeout."),
GUC_UNIT_MS
},
&TransactionTimeout,
0, 0, INT_MAX,
NULL, assign_transaction_timeout, NULL
},
{
{"idle_session_timeout", PGC_USERSET, CLIENT_CONN_STATEMENT,
gettext_noop("Sets the maximum allowed idle time between queries, when not in a transaction."),

View File

@ -195,9 +195,9 @@
#effective_io_concurrency = 1 # 1-1000; 0 disables prefetching
#maintenance_io_concurrency = 10 # 1-1000; 0 disables prefetching
#max_worker_processes = 8 # (change requires restart)
#max_parallel_workers_per_gather = 2 # taken from max_parallel_workers
#max_parallel_maintenance_workers = 2 # taken from max_parallel_workers
#max_parallel_workers = 8 # maximum number of max_worker_processes that
#max_parallel_workers_per_gather = 2 # limited by max_parallel_workers
#max_parallel_maintenance_workers = 2 # limited by max_parallel_workers
#max_parallel_workers = 8 # number of max_worker_processes that
# can be used in parallel operations
#parallel_leader_participation = on
@ -701,6 +701,7 @@
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0 # in milliseconds, 0 is disabled
#transaction_timeout = 0 # in milliseconds, 0 is disabled
#lock_timeout = 0 # in milliseconds, 0 is disabled
#idle_in_transaction_session_timeout = 0 # in milliseconds, 0 is disabled
#idle_session_timeout = 0 # in milliseconds, 0 is disabled

View File

@ -3115,6 +3115,7 @@ _doSetFixedOutputState(ArchiveHandle *AH)
ahprintf(AH, "SET statement_timeout = 0;\n");
ahprintf(AH, "SET lock_timeout = 0;\n");
ahprintf(AH, "SET idle_in_transaction_session_timeout = 0;\n");
ahprintf(AH, "SET transaction_timeout = 0;\n");
/* Select the correct character set encoding */
ahprintf(AH, "SET client_encoding = '%s';\n",

View File

@ -1252,6 +1252,8 @@ setup_connection(Archive *AH, const char *dumpencoding,
ExecuteSqlStatement(AH, "SET lock_timeout = 0");
if (AH->remoteVersion >= 90600)
ExecuteSqlStatement(AH, "SET idle_in_transaction_session_timeout = 0");
if (AH->remoteVersion >= 170000)
ExecuteSqlStatement(AH, "SET transaction_timeout = 0");
/*
* Quote all identifiers, if requested.

View File

@ -117,6 +117,7 @@ init_libpq_conn(PGconn *conn)
run_simple_command(conn, "SET statement_timeout = 0");
run_simple_command(conn, "SET lock_timeout = 0");
run_simple_command(conn, "SET idle_in_transaction_session_timeout = 0");
run_simple_command(conn, "SET transaction_timeout = 0");
/*
* we don't intend to do any updates, put the connection in read-only mode

View File

@ -91,6 +91,7 @@ extern PGDLLIMPORT volatile sig_atomic_t InterruptPending;
extern PGDLLIMPORT volatile sig_atomic_t QueryCancelPending;
extern PGDLLIMPORT volatile sig_atomic_t ProcDiePending;
extern PGDLLIMPORT volatile sig_atomic_t IdleInTransactionSessionTimeoutPending;
extern PGDLLIMPORT volatile sig_atomic_t TransactionTimeoutPending;
extern PGDLLIMPORT volatile sig_atomic_t IdleSessionTimeoutPending;
extern PGDLLIMPORT volatile sig_atomic_t ProcSignalBarrierPending;
extern PGDLLIMPORT volatile sig_atomic_t LogMemoryContextPending;

View File

@ -429,6 +429,7 @@ extern PGDLLIMPORT int DeadlockTimeout;
extern PGDLLIMPORT int StatementTimeout;
extern PGDLLIMPORT int LockTimeout;
extern PGDLLIMPORT int IdleInTransactionSessionTimeout;
extern PGDLLIMPORT int TransactionTimeout;
extern PGDLLIMPORT int IdleSessionTimeout;
extern PGDLLIMPORT bool log_lock_waits;

View File

@ -155,6 +155,7 @@ extern void assign_timezone_abbreviations(const char *newval, void *extra);
extern bool check_transaction_deferrable(bool *newval, void **extra, GucSource source);
extern bool check_transaction_isolation(int *newval, void **extra, GucSource source);
extern bool check_transaction_read_only(bool *newval, void **extra, GucSource source);
extern void assign_transaction_timeout(int newval, void *extra);
extern const char *show_unix_socket_permissions(void);
extern bool check_wal_buffers(int *newval, void **extra, GucSource source);
extern bool check_wal_consistency_checking(char **newval, void **extra,

View File

@ -31,6 +31,7 @@ typedef enum TimeoutId
STANDBY_TIMEOUT,
STANDBY_LOCK_TIMEOUT,
IDLE_IN_TRANSACTION_SESSION_TIMEOUT,
TRANSACTION_TIMEOUT,
IDLE_SESSION_TIMEOUT,
IDLE_STATS_UPDATE_TIMEOUT,
CLIENT_CONNECTION_CHECK_TIMEOUT,

View File

@ -72,3 +72,6 @@ installcheck-prepared-txns: all temp-install
check-prepared-txns: all temp-install
$(pg_isolation_regress_check) --schedule=$(srcdir)/isolation_schedule prepared-transactions prepared-transactions-cic
check-timeouts: all temp-install
$(pg_isolation_regress_check) timeouts timeouts-long

View File

@ -0,0 +1,69 @@
Parsed test spec with 3 sessions
starting permutation: s7_begin s7_sleep s7_commit_and_chain s7_sleep s7_check s7_abort
step s7_begin:
BEGIN ISOLATION LEVEL READ COMMITTED;
SET transaction_timeout = '1s';
step s7_sleep: SELECT pg_sleep(0.6);
pg_sleep
--------
(1 row)
step s7_commit_and_chain: COMMIT AND CHAIN;
step s7_sleep: SELECT pg_sleep(0.6);
pg_sleep
--------
(1 row)
step s7_check: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s7';
count
-----
0
(1 row)
step s7_abort: ABORT;
starting permutation: s8_begin s8_sleep s8_select_1 s8_check checker_sleep checker_sleep s8_check
step s8_begin:
BEGIN ISOLATION LEVEL READ COMMITTED;
SET transaction_timeout = '900ms';
step s8_sleep: SELECT pg_sleep(0.6);
pg_sleep
--------
(1 row)
step s8_select_1: SELECT 1;
?column?
--------
1
(1 row)
step s8_check: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s8';
count
-----
0
(1 row)
step checker_sleep: SELECT pg_sleep(0.3);
pg_sleep
--------
(1 row)
step checker_sleep: SELECT pg_sleep(0.3);
pg_sleep
--------
(1 row)
step s8_check: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s8';
count
-----
0
(1 row)

View File

@ -1,4 +1,4 @@
Parsed test spec with 2 sessions
Parsed test spec with 7 sessions
starting permutation: rdtbl sto locktbl
step rdtbl: SELECT * FROM accounts;
@ -79,3 +79,80 @@ step slto: SET lock_timeout = '10s'; SET statement_timeout = '10ms';
step update: DELETE FROM accounts WHERE accountid = 'checking'; <waiting ...>
step update: <... completed>
ERROR: canceling statement due to statement timeout
starting permutation: stto s3_begin s3_sleep s3_check s3_abort
step stto: SET statement_timeout = '10ms'; SET transaction_timeout = '1s';
step s3_begin: BEGIN ISOLATION LEVEL READ COMMITTED;
step s3_sleep: SELECT pg_sleep(0.1);
ERROR: canceling statement due to statement timeout
step s3_check: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s3';
count
-----
1
(1 row)
step s3_abort: ABORT;
starting permutation: tsto s3_begin checker_sleep s3_check
step tsto: SET statement_timeout = '1s'; SET transaction_timeout = '10ms';
step s3_begin: BEGIN ISOLATION LEVEL READ COMMITTED;
step checker_sleep: SELECT pg_sleep(0.1);
pg_sleep
--------
(1 row)
step s3_check: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s3';
count
-----
0
(1 row)
starting permutation: itto s4_begin checker_sleep s4_check
step itto: SET idle_in_transaction_session_timeout = '10ms'; SET transaction_timeout = '1s';
step s4_begin: BEGIN ISOLATION LEVEL READ COMMITTED;
step checker_sleep: SELECT pg_sleep(0.1);
pg_sleep
--------
(1 row)
step s4_check: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s4';
count
-----
0
(1 row)
starting permutation: tito s5_begin checker_sleep s5_check
step tito: SET idle_in_transaction_session_timeout = '1s'; SET transaction_timeout = '10ms';
step s5_begin: BEGIN ISOLATION LEVEL READ COMMITTED;
step checker_sleep: SELECT pg_sleep(0.1);
pg_sleep
--------
(1 row)
step s5_check: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s5';
count
-----
0
(1 row)
starting permutation: s6_begin s6_tt checker_sleep s6_check
step s6_begin: BEGIN ISOLATION LEVEL READ COMMITTED;
step s6_tt: SET statement_timeout = '1s'; SET transaction_timeout = '10ms';
step checker_sleep: SELECT pg_sleep(0.1);
pg_sleep
--------
(1 row)
step s6_check: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s6';
count
-----
0
(1 row)

View File

@ -1,4 +1,4 @@
# Simple tests for statement_timeout and lock_timeout features
# Simple tests for statement_timeout, lock_timeout and transaction_timeout features
setup
{
@ -27,6 +27,33 @@ step locktbl { LOCK TABLE accounts; }
step update { DELETE FROM accounts WHERE accountid = 'checking'; }
teardown { ABORT; }
session s3
step s3_begin { BEGIN ISOLATION LEVEL READ COMMITTED; }
step stto { SET statement_timeout = '10ms'; SET transaction_timeout = '1s'; }
step tsto { SET statement_timeout = '1s'; SET transaction_timeout = '10ms'; }
step s3_sleep { SELECT pg_sleep(0.1); }
step s3_abort { ABORT; }
session s4
step s4_begin { BEGIN ISOLATION LEVEL READ COMMITTED; }
step itto { SET idle_in_transaction_session_timeout = '10ms'; SET transaction_timeout = '1s'; }
session s5
step s5_begin { BEGIN ISOLATION LEVEL READ COMMITTED; }
step tito { SET idle_in_transaction_session_timeout = '1s'; SET transaction_timeout = '10ms'; }
session s6
step s6_begin { BEGIN ISOLATION LEVEL READ COMMITTED; }
step s6_tt { SET statement_timeout = '1s'; SET transaction_timeout = '10ms'; }
session checker
step checker_sleep { SELECT pg_sleep(0.1); }
step s3_check { SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s3'; }
step s4_check { SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s4'; }
step s5_check { SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s5'; }
step s6_check { SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s6'; }
# It's possible that the isolation tester will not observe the final
# steps as "waiting", thanks to the relatively short timeouts we use.
# We can ensure consistent test output by marking those steps with (*).
@ -47,3 +74,14 @@ permutation wrtbl lto update(*)
permutation wrtbl lsto update(*)
# statement timeout expires first, row-level lock
permutation wrtbl slto update(*)
# statement timeout expires first
permutation stto s3_begin s3_sleep s3_check s3_abort
# transaction timeout expires first, session s3 FATAL-out
permutation tsto s3_begin checker_sleep s3_check
# idle in transaction timeout expires first, session s4 FATAL-out
permutation itto s4_begin checker_sleep s4_check
# transaction timeout expires first, session s5 FATAL-out
permutation tito s5_begin checker_sleep s5_check
# transaction timeout can be scheduled amid a transaction, session s6 FATAL-out
permutation s6_begin s6_tt checker_sleep s6_check

View File

@ -114,6 +114,7 @@ use Socket;
use Test::More;
use PostgreSQL::Test::Utils ();
use PostgreSQL::Test::BackgroundPsql ();
use Text::ParseWords qw(shellwords);
use Time::HiRes qw(usleep);
use Scalar::Util qw(blessed);
@ -519,6 +520,12 @@ sub init
$params{allows_streaming} = 0 unless defined $params{allows_streaming};
$params{has_archiving} = 0 unless defined $params{has_archiving};
my $initdb_extra_opts_env = $ENV{PG_TEST_INITDB_EXTRA_OPTS};
if (defined $initdb_extra_opts_env)
{
push @{ $params{extra} }, shellwords($initdb_extra_opts_env);
}
mkdir $self->backup_dir;
mkdir $self->archive_dir;

View File

@ -130,14 +130,20 @@ $standby1->init_from_backup(
has_streaming => 1,
has_restoring => 1);
# Increase the log_min_messages setting to DEBUG2 on both the standby and
# primary to debug test failures, if any.
my $connstr_1 = $primary->connstr;
$standby1->append_conf(
'postgresql.conf', qq(
hot_standby_feedback = on
primary_slot_name = 'sb1_slot'
primary_conninfo = '$connstr_1 dbname=postgres'
log_min_messages = 'debug2'
));
$primary->append_conf('postgresql.conf', "log_min_messages = 'debug2'");
$primary->reload;
$primary->psql('postgres',
q{SELECT pg_create_logical_replication_slot('lsub2_slot', 'test_decoding', false, false, true);}
);
@ -223,17 +229,14 @@ is( $standby1->safe_psql(
$standby1->append_conf('postgresql.conf', 'max_slot_wal_keep_size = -1');
$standby1->reload;
# Enable the subscription to let it catch up to the latest wal position
$subscriber1->safe_psql('postgres',
"ALTER SUBSCRIPTION regress_mysub1 ENABLE");
# To ensure that restart_lsn has moved to a recent WAL position, we re-create
# the subscription and the logical slot.
$subscriber1->safe_psql(
'postgres', qq[
DROP SUBSCRIPTION regress_mysub1;
CREATE SUBSCRIPTION regress_mysub1 CONNECTION '$publisher_connstr' PUBLICATION regress_mypub WITH (slot_name = lsub1_slot, copy_data = false, failover = true);
]);
# This wait ensures that confirmed_flush_lsn has been moved to latest
# position.
$primary->wait_for_catchup('regress_mysub1');
# To ensure that restart_lsn has moved to a recent WAL position, we need
# to log XLOG_RUNNING_XACTS and make sure the same is processed as well
$primary->psql('postgres', "CHECKPOINT");
$primary->wait_for_catchup('regress_mysub1');
# Do not allow any further advancement of the restart_lsn for the lsub1_slot.
@ -268,6 +271,13 @@ is( $standby1->safe_psql(
"t",
'logical slot is re-synced');
# Reset the log_min_messages to the default value.
$primary->append_conf('postgresql.conf', "log_min_messages = 'warning'");
$primary->reload;
$standby1->append_conf('postgresql.conf', "log_min_messages = 'warning'");
$standby1->reload;
##################################################
# Test that a synchronized slot can not be decoded, altered or dropped by the
# user

View File

@ -5277,7 +5277,7 @@ reset enable_nestloop;
explain (costs off)
select a.unique1, b.unique2
from onek a left join onek b on a.unique1 = b.unique2
where b.unique2 = any (select q1 from int8_tbl c where c.q1 < b.unique1);
where (b.unique2, random() > 0) = any (select q1, random() > 0 from int8_tbl c where c.q1 < b.unique1);
QUERY PLAN
----------------------------------------------------------
Hash Join
@ -5293,7 +5293,7 @@ select a.unique1, b.unique2
select a.unique1, b.unique2
from onek a left join onek b on a.unique1 = b.unique2
where b.unique2 = any (select q1 from int8_tbl c where c.q1 < b.unique1);
where (b.unique2, random() > 0) = any (select q1, random() > 0 from int8_tbl c where c.q1 < b.unique1);
unique1 | unique2
---------+---------
123 | 123
@ -8210,12 +8210,12 @@ select * from (values (0), (1)) v(id),
lateral (select * from int8_tbl t1,
lateral (select * from
(select * from int8_tbl t2
where q1 = any (select q2 from int8_tbl t3
where (q1, random() > 0) = any (select q2, random() > 0 from int8_tbl t3
where q2 = (select greatest(t1.q1,t2.q2))
and (select v.id=0)) offset 0) ss2) ss
where t1.q1 = ss.q2) ss0;
QUERY PLAN
----------------------------------------------------------------------
QUERY PLAN
-------------------------------------------------------------------------------
Nested Loop
Output: "*VALUES*".column1, t1.q1, t1.q2, ss2.q1, ss2.q2
-> Seq Scan on public.int8_tbl t1
@ -8232,7 +8232,7 @@ lateral (select * from int8_tbl t1,
Filter: (SubPlan 3)
SubPlan 3
-> Result
Output: t3.q2
Output: t3.q2, (random() > '0'::double precision)
One-Time Filter: $4
InitPlan 1 (returns $2)
-> Result
@ -8249,7 +8249,7 @@ select * from (values (0), (1)) v(id),
lateral (select * from int8_tbl t1,
lateral (select * from
(select * from int8_tbl t2
where q1 = any (select q2 from int8_tbl t3
where (q1, random() > 0) = any (select q2, random() > 0 from int8_tbl t3
where q2 = (select greatest(t1.q1,t2.q2))
and (select v.id=0)) offset 0) ss2) ss
where t1.q1 = ss.q2) ss0;

View File

@ -1926,3 +1926,129 @@ select * from x for update;
Output: subselect_tbl.f1, subselect_tbl.f2, subselect_tbl.f3
(2 rows)
-- Pull-up the direct-correlated ANY_SUBLINK
explain (costs off)
select * from tenk1 A where hundred in (select hundred from tenk2 B where B.odd = A.odd);
QUERY PLAN
------------------------------------------------------------
Hash Join
Hash Cond: ((a.odd = b.odd) AND (a.hundred = b.hundred))
-> Seq Scan on tenk1 a
-> Hash
-> HashAggregate
Group Key: b.odd, b.hundred
-> Seq Scan on tenk2 b
(7 rows)
explain (costs off)
select * from tenk1 A where exists
(select 1 from tenk2 B
where A.hundred in (select C.hundred FROM tenk2 C
WHERE c.odd = b.odd));
QUERY PLAN
---------------------------------
Nested Loop Semi Join
Join Filter: (SubPlan 1)
-> Seq Scan on tenk1 a
-> Materialize
-> Seq Scan on tenk2 b
SubPlan 1
-> Seq Scan on tenk2 c
Filter: (odd = b.odd)
(8 rows)
-- we should only try to pull up the sublink into the RHS of a left join,
-- but a.hundred is not available.
explain (costs off)
SELECT * FROM tenk1 A LEFT JOIN tenk2 B
ON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);
QUERY PLAN
---------------------------------
Nested Loop Left Join
Join Filter: (SubPlan 1)
-> Seq Scan on tenk1 a
-> Materialize
-> Seq Scan on tenk2 b
SubPlan 1
-> Seq Scan on tenk2 c
Filter: (odd = b.odd)
(8 rows)
-- we should only try to pull up the sublink into the RHS of a left join,
-- but a.odd is not available for this.
explain (costs off)
SELECT * FROM tenk1 A LEFT JOIN tenk2 B
ON B.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = a.odd);
QUERY PLAN
---------------------------------
Nested Loop Left Join
Join Filter: (SubPlan 1)
-> Seq Scan on tenk1 a
-> Materialize
-> Seq Scan on tenk2 b
SubPlan 1
-> Seq Scan on tenk2 c
Filter: (odd = a.odd)
(8 rows)
-- should be able to pull up since all the references are available
explain (costs off)
SELECT * FROM tenk1 A LEFT JOIN tenk2 B
ON B.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);
QUERY PLAN
------------------------------------------------------------------------
Nested Loop Left Join
-> Seq Scan on tenk1 a
-> Materialize
-> Hash Join
Hash Cond: ((b.odd = c.odd) AND (b.hundred = c.hundred))
-> Seq Scan on tenk2 b
-> Hash
-> HashAggregate
Group Key: c.odd, c.hundred
-> Seq Scan on tenk2 c
(10 rows)
-- we can pull up the sublink into the inner JoinExpr.
explain (costs off)
SELECT * FROM tenk1 A INNER JOIN tenk2 B
ON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);
QUERY PLAN
-------------------------------------------------
Hash Join
Hash Cond: (c.odd = b.odd)
-> Hash Join
Hash Cond: (a.hundred = c.hundred)
-> Seq Scan on tenk1 a
-> Hash
-> HashAggregate
Group Key: c.odd, c.hundred
-> Seq Scan on tenk2 c
-> Hash
-> Seq Scan on tenk2 b
(11 rows)
-- we can pull up the aggregate sublink into RHS of a left join.
explain (costs off)
SELECT * FROM tenk1 A LEFT JOIN tenk2 B
ON B.hundred in (SELECT min(c.hundred) FROM tenk2 C WHERE c.odd = b.odd);
QUERY PLAN
---------------------------------------------------------------------------------------
Nested Loop Left Join
-> Seq Scan on tenk1 a
-> Materialize
-> Nested Loop
-> Seq Scan on tenk2 b
-> Memoize
Cache Key: b.hundred, b.odd
Cache Mode: binary
-> Subquery Scan on "ANY_subquery"
Filter: (b.hundred = "ANY_subquery".min)
-> Result
InitPlan 1 (returns $1)
-> Limit
-> Index Scan using tenk2_hundred on tenk2 c
Index Cond: (hundred IS NOT NULL)
Filter: (odd = b.odd)
(16 rows)

View File

@ -1438,7 +1438,7 @@ where (x = 0) or (q1 >= q2 and q1 <= q2);
-- Ensure we get a Nested Loop join between tenk1 and tenk2
explain (costs off)
select t1.unique1 from tenk1 t1
inner join tenk2 t2 on t1.tenthous = t2.tenthous
inner join tenk2 t2 on t1.tenthous = t2.tenthous and t2.thousand = 0
union all
(values(1)) limit 1;
QUERY PLAN
@ -1450,8 +1450,9 @@ inner join tenk2 t2 on t1.tenthous = t2.tenthous
-> Seq Scan on tenk1 t1
-> Materialize
-> Seq Scan on tenk2 t2
Filter: (thousand = 0)
-> Result
(8 rows)
(9 rows)
-- Ensure there is no problem if cheapest_startup_path is NULL
explain (costs off)

View File

@ -2306,6 +2306,7 @@ regression_main(int argc, char *argv[],
const char *keywords[4];
const char *values[4];
PGPing rv;
const char *initdb_extra_opts_env;
/*
* Prepare the temp instance
@ -2327,6 +2328,8 @@ regression_main(int argc, char *argv[],
if (!directory_exists(buf))
make_directory(buf);
initdb_extra_opts_env = getenv("PG_TEST_INITDB_EXTRA_OPTS");
initStringInfo(&cmd);
/*
@ -2339,7 +2342,7 @@ regression_main(int argc, char *argv[],
* duplicate it until we require perl at build time.
*/
initdb_template_dir = getenv("INITDB_TEMPLATE");
if (initdb_template_dir == NULL || nolocale || debug)
if (initdb_template_dir == NULL || nolocale || debug || initdb_extra_opts_env)
{
note("initializing database system by running initdb");
@ -2352,6 +2355,8 @@ regression_main(int argc, char *argv[],
appendStringInfoString(&cmd, " --debug");
if (nolocale)
appendStringInfoString(&cmd, " --no-locale");
if (initdb_extra_opts_env)
appendStringInfo(&cmd, " %s", initdb_extra_opts_env);
appendStringInfo(&cmd, " > \"%s/log/initdb.log\" 2>&1", outputdir);
fflush(NULL);
if (system(cmd.data))

View File

@ -1864,11 +1864,11 @@ reset enable_nestloop;
explain (costs off)
select a.unique1, b.unique2
from onek a left join onek b on a.unique1 = b.unique2
where b.unique2 = any (select q1 from int8_tbl c where c.q1 < b.unique1);
where (b.unique2, random() > 0) = any (select q1, random() > 0 from int8_tbl c where c.q1 < b.unique1);
select a.unique1, b.unique2
from onek a left join onek b on a.unique1 = b.unique2
where b.unique2 = any (select q1 from int8_tbl c where c.q1 < b.unique1);
where (b.unique2, random() > 0) = any (select q1, random() > 0 from int8_tbl c where c.q1 < b.unique1);
--
-- test full-join strength reduction
@ -3038,7 +3038,7 @@ select * from (values (0), (1)) v(id),
lateral (select * from int8_tbl t1,
lateral (select * from
(select * from int8_tbl t2
where q1 = any (select q2 from int8_tbl t3
where (q1, random() > 0) = any (select q2, random() > 0 from int8_tbl t3
where q2 = (select greatest(t1.q1,t2.q2))
and (select v.id=0)) offset 0) ss2) ss
where t1.q1 = ss.q2) ss0;
@ -3047,7 +3047,7 @@ select * from (values (0), (1)) v(id),
lateral (select * from int8_tbl t1,
lateral (select * from
(select * from int8_tbl t2
where q1 = any (select q2 from int8_tbl t3
where (q1, random() > 0) = any (select q2, random() > 0 from int8_tbl t3
where q2 = (select greatest(t1.q1,t2.q2))
and (select v.id=0)) offset 0) ss2) ss
where t1.q1 = ss.q2) ss0;

View File

@ -968,3 +968,40 @@ select * from (with x as (select 2 as y) select * from x) ss;
explain (verbose, costs off)
with x as (select * from subselect_tbl)
select * from x for update;
-- Pull-up the direct-correlated ANY_SUBLINK
explain (costs off)
select * from tenk1 A where hundred in (select hundred from tenk2 B where B.odd = A.odd);
explain (costs off)
select * from tenk1 A where exists
(select 1 from tenk2 B
where A.hundred in (select C.hundred FROM tenk2 C
WHERE c.odd = b.odd));
-- we should only try to pull up the sublink into the RHS of a left join,
-- but a.hundred is not available.
explain (costs off)
SELECT * FROM tenk1 A LEFT JOIN tenk2 B
ON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);
-- we should only try to pull up the sublink into the RHS of a left join,
-- but a.odd is not available for this.
explain (costs off)
SELECT * FROM tenk1 A LEFT JOIN tenk2 B
ON B.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = a.odd);
-- should be able to pull up since all the references are available
explain (costs off)
SELECT * FROM tenk1 A LEFT JOIN tenk2 B
ON B.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);
-- we can pull up the sublink into the inner JoinExpr.
explain (costs off)
SELECT * FROM tenk1 A INNER JOIN tenk2 B
ON A.hundred in (SELECT c.hundred FROM tenk2 C WHERE c.odd = b.odd);
-- we can pull up the aggregate sublink into RHS of a left join.
explain (costs off)
SELECT * FROM tenk1 A LEFT JOIN tenk2 B
ON B.hundred in (SELECT min(c.hundred) FROM tenk2 C WHERE c.odd = b.odd);

View File

@ -548,7 +548,7 @@ where (x = 0) or (q1 >= q2 and q1 <= q2);
-- Ensure we get a Nested Loop join between tenk1 and tenk2
explain (costs off)
select t1.unique1 from tenk1 t1
inner join tenk2 t2 on t1.tenthous = t2.tenthous
inner join tenk2 t2 on t1.tenthous = t2.tenthous and t2.thousand = 0
union all
(values(1)) limit 1;