Compare commits

..

No commits in common. "3ada0d2cae4d9d3e045c72e3ee0b37ccb6e13902" and "2f35c14cfb3dadede883a7d8f458e5a15f13a97b" have entirely different histories.

30 changed files with 391 additions and 673 deletions

View File

@@ -95,7 +95,7 @@ heap_force_common(FunctionCallInfo fcinfo, HeapTupleForceOption heap_force_opt)
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
errmsg("recovery is in progress"),
errhint("Heap surgery functions cannot be executed during recovery.")));
errhint("heap surgery functions cannot be executed during recovery.")));
/* Check inputs. */
sanity_check_tid_array(ta, &ntids);

View File

@@ -247,8 +247,7 @@
</para>
<para>
This role also behaves as a normal
<glossterm linkend="glossary-database-superuser">database superuser</glossterm>,
and its superuser status cannot be removed.
<glossterm linkend="glossary-database-superuser">database superuser</glossterm>.
</para>
</glossdef>
</glossentry>
@@ -894,28 +893,6 @@
</para>
</glossdef>
</glossentry>
<glossentry id="glossary-incremental-backup">
<glossterm>Incremental backup</glossterm>
<glossdef>
<para>
A special <glossterm linkend="glossary-basebackup">base backup</glossterm>
that for some files may contain only those pages that were modified since
a previous backup, as opposed to the full contents of every file. Like
base backups, it is generated by the tool <xref linkend="app-pgbasebackup"/>.
</para>
<para>
To restore incremental backups the tool <xref linkend="app-pgcombinebackup"/>
is used, which combines incremental backups with a base backup.
Afterwards, recovery can use
<glossterm linkend="glossary-wal">WAL</glossterm> to bring the
<glossterm linkend="glossary-db-cluster">database cluster</glossterm> to
a consistent state.
</para>
<para>
For more information, see <xref linkend="backup-incremental-backup"/>.
</para>
</glossdef>
</glossentry>
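The entry above describes how an incremental backup stores only modified pages and is later merged with a base backup by <xref linkend="app-pgcombinebackup"/>. As a rough illustration of that merge step, here is a toy Python sketch; it is not the real pg_combinebackup algorithm or on-disk format (which uses block maps inside a data directory), and the function name and dict shapes are invented for this example.

```python
def combine_backups(base, incrementals):
    """base maps filename -> list of pages (full file contents);
    incrementals is a list, oldest first, of filename -> {page_no: page}."""
    result = {name: list(pages) for name, pages in base.items()}
    for incr in incrementals:
        for name, modified in incr.items():
            pages = result.setdefault(name, [])
            for page_no, page in sorted(modified.items()):
                # Grow the file if an incremental backup extended it.
                while len(pages) <= page_no:
                    pages.append(b"")
                pages[page_no] = page
    return result

base = {"rel1": [b"A0", b"A1", b"A2"]}
incrementals = [{"rel1": {1: b"B1"}},            # older incremental
                {"rel1": {2: b"C2", 3: b"C3"}}]  # newer incremental
print(combine_backups(base, incrementals))
# {'rel1': [b'A0', b'B1', b'C2', b'C3']}
```

Later incrementals overwrite earlier pages, mirroring how the newest copy of each modified block wins when the backups are combined.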
<glossentry id="glossary-insert">
<glossterm>Insert</glossterm>
@@ -2180,20 +2157,6 @@
</glossdef>
</glossentry>
<glossentry id="glossary-wal-summarizer">
<glossterm>WAL summarizer (process)</glossterm>
<glossdef>
<para>
A special <glossterm linkend="glossary-backend">backend process</glossterm>
that summarizes WAL data for
<glossterm linkend="glossary-incremental-backup">incremental backups</glossterm>.
</para>
<para>
For more information, see <xref linkend="runtime-config-wal-summarization"/>.
</para>
</glossdef>
</glossentry>
<glossentry id="glossary-wal-writer">
<glossterm>WAL writer (process)</glossterm>
<glossdef>

View File

@@ -924,8 +924,7 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
<secondary>streaming replication</secondary>
</indexterm>
<para>
Replication slots provide an automated way to ensure that the
primary server does
Replication slots provide an automated way to ensure that the primary does
not remove WAL segments until they have been received by all standbys,
and that the primary does not remove rows which could cause a
<link linkend="hot-standby-conflict">recovery conflict</link> even when the
@@ -936,28 +935,21 @@ primary_conninfo = 'host=192.168.1.50 port=5432 user=foo password=foopass'
of old WAL segments using <xref linkend="guc-wal-keep-size"/>, or by
storing the segments in an archive using
<xref linkend="guc-archive-command"/> or <xref linkend="guc-archive-library"/>.
A disadvantage of these methods is that they
often result in retaining more WAL segments than
However, these methods often result in retaining more WAL segments than
required, whereas replication slots retain only the number of segments
known to be needed.
known to be needed. On the other hand, replication slots can retain so
many WAL segments that they fill up the space allocated
for <literal>pg_wal</literal>;
<xref linkend="guc-max-slot-wal-keep-size"/> limits the size of WAL files
retained by replication slots.
</para>
<para>
Similarly, <xref linkend="guc-hot-standby-feedback"/> on its own, without
also using a replication slot, provides protection against relevant rows
being removed by vacuum, but provides no protection during any time period
when the standby is not connected.
when the standby is not connected. Replication slots overcome these
disadvantages.
</para>
<caution>
<para>
Beware that replication slots can cause the server to retain so
many WAL segments that they fill up the space allocated for
<literal>pg_wal</literal>.
<xref linkend="guc-max-slot-wal-keep-size"/> can be used to limit the size
of WAL files retained by replication slots.
</para>
</caution>
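The trade-off described above (fixed retention via <literal>wal_keep_size</literal> versus slot-driven retention capped by <literal>max_slot_wal_keep_size</literal>) can be illustrated with a toy Python model. Segment numbers stand in for WAL positions, and both function names are invented for this sketch; the real machinery lives in the server, not in arithmetic like this.

```python
def retained_with_keep_size(current_seg, keep_segs):
    """wal_keep_size-style policy: always keep the last keep_segs
    segments, whether or not any standby still needs them."""
    return max(0, current_seg - keep_segs)

def retained_with_slot(current_seg, slot_restart_seg, max_slot_keep_segs=None):
    """Slot-style policy: keep everything from the slowest slot's
    position, optionally capped the way max_slot_wal_keep_size caps
    retention so pg_wal cannot grow without bound."""
    oldest_kept = slot_restart_seg
    if max_slot_keep_segs is not None:
        oldest_kept = max(oldest_kept, current_seg - max_slot_keep_segs)
    return oldest_kept

# A standby only 3 segments behind: the slot keeps exactly what is
# needed, while a generous wal_keep_size keeps far more.
print(retained_with_keep_size(100, 50))  # oldest kept segment: 50
print(retained_with_slot(100, 97))       # oldest kept segment: 97
# A stalled standby: the cap bounds how far pg_wal can grow.
print(retained_with_slot(100, 10, max_slot_keep_segs=20))  # 80
```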
<sect3 id="streaming-replication-slots-manipulation">
<title>Querying and Manipulating Replication Slots</title>
<para>

View File

@@ -999,8 +999,7 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
<literal>client backend</literal>, <literal>checkpointer</literal>,
<literal>archiver</literal>, <literal>standalone backend</literal>,
<literal>startup</literal>, <literal>walreceiver</literal>,
<literal>walsender</literal>, <literal>walwriter</literal> and
<literal>walsummarizer</literal>.
<literal>walsender</literal> and <literal>walwriter</literal>.
In addition, background workers registered by extensions may have
additional types.
</para></entry>

View File

@@ -69,9 +69,7 @@ ALTER ROLE { <replaceable class="parameter">role_specification</replaceable> | A
<link linkend="sql-grant"><command>GRANT</command></link> and
<link linkend="sql-revoke"><command>REVOKE</command></link> for that.)
Attributes not mentioned in the command retain their previous settings.
Database superusers can change any of these settings for any role, except
for changing the <literal>SUPERUSER</literal> property for the
<glossterm linkend="glossary-bootstrap-superuser">bootstrap superuser</glossterm>.
Database superusers can change any of these settings for any role.
Non-superuser roles having <literal>CREATEROLE</literal> privilege can
change most of these properties, but only for non-superuser and
non-replication roles for which they have been granted

View File

@@ -350,7 +350,7 @@ ALTER ROLE myname SET enable_indexscan TO off;
options. Thus, the fact that privileges are not inherited by default nor
is <literal>SET ROLE</literal> granted by default is a safeguard against
accidents, not a security feature. Also note that, because this automatic
grant is granted by the bootstrap superuser, it cannot be removed or changed by
grant is granted by the bootstrap user, it cannot be removed or changed by
the <literal>CREATEROLE</literal> user; however, any superuser could
revoke it, modify it, and/or issue additional such grants to other
<literal>CREATEROLE</literal> users. Whichever <literal>CREATEROLE</literal>
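As a rough summary of the permission rules in this section, the following Python sketch encodes who may alter another role's attributes. It is an illustration only: the invented <literal>can_alter_role</literal> helper compresses several server-side checks into one predicate and ignores the bootstrap-superuser and per-property subtleties discussed above.

```python
def can_alter_role(actor_is_super, actor_has_createrole,
                   target_is_super, target_is_replication,
                   actor_has_admin_on_target):
    """Simplified model: superusers may change any setting for any role;
    CREATEROLE holders may change most settings, but only for
    non-superuser, non-replication roles on which they hold ADMIN."""
    if actor_is_super:
        return True
    if actor_has_createrole:
        return (not target_is_super and not target_is_replication
                and actor_has_admin_on_target)
    return False

print(can_alter_role(True, False, True, False, False))   # True
print(can_alter_role(False, True, False, False, True))   # True
print(can_alter_role(False, True, True, False, True))    # False
```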

View File

@@ -35,8 +35,6 @@ typedef struct
/* tuple visibility test, initialized for the relation */
GlobalVisState *vistest;
/* whether or not dead items can be set LP_UNUSED during pruning */
bool mark_unused_now;
TransactionId new_prune_xid; /* new prune hint value for page */
TransactionId snapshotConflictHorizon; /* latest xid removed */
@@ -69,7 +67,6 @@ static void heap_prune_record_prunable(PruneState *prstate, TransactionId xid);
static void heap_prune_record_redirect(PruneState *prstate,
OffsetNumber offnum, OffsetNumber rdoffnum);
static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
static void heap_prune_record_dead_or_unused(PruneState *prstate, OffsetNumber offnum);
static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
static void page_verify_redirects(Page page);
@@ -151,13 +148,7 @@ heap_page_prune_opt(Relation relation, Buffer buffer)
{
PruneResult presult;
/*
* For now, pass mark_unused_now as false regardless of whether or
* not the relation has indexes, since we cannot safely determine
* that during on-access pruning with the current implementation.
*/
heap_page_prune(relation, buffer, vistest, false,
&presult, NULL);
heap_page_prune(relation, buffer, vistest, &presult, NULL);
/*
* Report the number of tuples reclaimed to pgstats. This is
@@ -202,9 +193,6 @@ heap_page_prune_opt(Relation relation, Buffer buffer)
* (see heap_prune_satisfies_vacuum and
* HeapTupleSatisfiesVacuum).
*
* mark_unused_now indicates whether or not dead items can be set LP_UNUSED during
* pruning.
*
* off_loc is the offset location required by the caller to use in error
* callback.
*
@@ -215,7 +203,6 @@ heap_page_prune_opt(Relation relation, Buffer buffer)
void
heap_page_prune(Relation relation, Buffer buffer,
GlobalVisState *vistest,
bool mark_unused_now,
PruneResult *presult,
OffsetNumber *off_loc)
{
@@ -240,7 +227,6 @@ heap_page_prune(Relation relation, Buffer buffer,
prstate.new_prune_xid = InvalidTransactionId;
prstate.rel = relation;
prstate.vistest = vistest;
prstate.mark_unused_now = mark_unused_now;
prstate.snapshotConflictHorizon = InvalidTransactionId;
prstate.nredirected = prstate.ndead = prstate.nunused = 0;
memset(prstate.marked, 0, sizeof(prstate.marked));
@@ -320,9 +306,9 @@ heap_page_prune(Relation relation, Buffer buffer,
if (off_loc)
*off_loc = offnum;
/* Nothing to do if slot is empty */
/* Nothing to do if slot is empty or already dead */
itemid = PageGetItemId(page, offnum);
if (!ItemIdIsUsed(itemid))
if (!ItemIdIsUsed(itemid) || ItemIdIsDead(itemid))
continue;
/* Process this item or chain of items */
@@ -595,17 +581,7 @@ heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum,
* function.)
*/
if (ItemIdIsDead(lp))
{
/*
* If the caller set mark_unused_now true, we can set dead line
* pointers LP_UNUSED now. We don't increment ndeleted here since
* the LP was already marked dead.
*/
if (unlikely(prstate->mark_unused_now))
heap_prune_record_unused(prstate, offnum);
break;
}
Assert(ItemIdIsNormal(lp));
htup = (HeapTupleHeader) PageGetItem(dp, lp);
@@ -739,7 +715,7 @@ heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum,
* redirect the root to the correct chain member.
*/
if (i >= nchain)
heap_prune_record_dead_or_unused(prstate, rootoffnum);
heap_prune_record_dead(prstate, rootoffnum);
else
heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);
}
@@ -750,9 +726,9 @@ heap_prune_chain(Buffer buffer, OffsetNumber rootoffnum,
* item. This can happen if the loop in heap_page_prune caused us to
* visit the dead successor of a redirect item before visiting the
* redirect item. We can clean up by setting the redirect item to
* DEAD state or LP_UNUSED if the caller indicated.
* DEAD state.
*/
heap_prune_record_dead_or_unused(prstate, rootoffnum);
heap_prune_record_dead(prstate, rootoffnum);
}
return ndeleted;
@@ -798,27 +774,6 @@ heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum)
prstate->marked[offnum] = true;
}
/*
* Depending on whether or not the caller set mark_unused_now to true, record that a
* line pointer should be marked LP_DEAD or LP_UNUSED. There are other cases in
* which we will mark line pointers LP_UNUSED, but we will not mark line
* pointers LP_DEAD if mark_unused_now is true.
*/
static void
heap_prune_record_dead_or_unused(PruneState *prstate, OffsetNumber offnum)
{
/*
* If the caller set mark_unused_now to true, we can remove dead tuples
* during pruning instead of marking their line pointers dead. Set this
* tuple's line pointer LP_UNUSED. We hint that this option is less
* likely.
*/
if (unlikely(prstate->mark_unused_now))
heap_prune_record_unused(prstate, offnum);
else
heap_prune_record_dead(prstate, offnum);
}
/* Record line pointer to be marked unused */
static void
heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
@@ -948,24 +903,13 @@ heap_page_prune_execute(Buffer buffer,
#ifdef USE_ASSERT_CHECKING
/*
* When heap_page_prune() was called, mark_unused_now may have been
* passed as true, which allows would-be LP_DEAD items to be made
* LP_UNUSED instead. This is only possible if the relation has no
* indexes. If there are any dead items, then mark_unused_now was not
* true and every item being marked LP_UNUSED must refer to a
* heap-only tuple.
* Only heap-only tuples can become LP_UNUSED during pruning. They
* don't need to be left in place as LP_DEAD items until VACUUM gets
* around to doing index vacuuming.
*/
if (ndead > 0)
{
Assert(ItemIdHasStorage(lp) && ItemIdIsNormal(lp));
htup = (HeapTupleHeader) PageGetItem(page, lp);
Assert(HeapTupleHeaderIsHeapOnly(htup));
}
else
{
Assert(ItemIdIsUsed(lp));
}
Assert(ItemIdHasStorage(lp) && ItemIdIsNormal(lp));
htup = (HeapTupleHeader) PageGetItem(page, lp);
Assert(HeapTupleHeaderIsHeapOnly(htup));
#endif
ItemIdSetUnused(lp);
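The assertion above relies on the line-pointer life cycle during pruning: after this revert, only dead heap-only tuples may go straight to LP_UNUSED (no index entry ever points at them), while other dead tuples become LP_DEAD and wait for index vacuuming. A toy Python sketch of that rule follows; the states and the prune_item helper are invented for illustration, since the real states are flag bits in ItemIdData.

```python
from enum import Enum

class LP(Enum):
    NORMAL = "normal"      # points at tuple storage
    REDIRECT = "redirect"  # HOT chain root pointing at another slot
    DEAD = "dead"          # storage reclaimed; index entries may remain
    UNUSED = "unused"      # fully reusable slot

def prune_item(state, tuple_dead, is_heap_only):
    """What on-access pruning may do to one item in this simplified
    model: dead heap-only tuples can be set UNUSED immediately; other
    dead tuples become DEAD so a later index-vacuum pass can find them;
    already-dead or still-live items are left alone."""
    if state in (LP.DEAD, LP.UNUSED) or not tuple_dead:
        return state
    return LP.UNUSED if is_heap_only else LP.DEAD

print(prune_item(LP.NORMAL, tuple_dead=True, is_heap_only=True))   # LP.UNUSED
print(prune_item(LP.NORMAL, tuple_dead=True, is_heap_only=False))  # LP.DEAD
print(prune_item(LP.DEAD, tuple_dead=True, is_heap_only=True))     # LP.DEAD
```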

View File

@@ -212,6 +212,23 @@ typedef struct LVRelState
int64 missed_dead_tuples; /* # removable, but not removed */
} LVRelState;
/*
* State returned by lazy_scan_prune()
*/
typedef struct LVPagePruneState
{
bool has_lpdead_items; /* includes existing LP_DEAD items */
/*
* State describes the proper VM bit states to set for the page following
* pruning and freezing. all_visible implies !has_lpdead_items, but don't
* trust all_frozen result unless all_visible is also set to true.
*/
bool all_visible; /* Every item visible to all? */
bool all_frozen; /* provided all_visible is also true */
TransactionId visibility_cutoff_xid; /* For recovery conflicts */
} LVPagePruneState;
/* Struct for saving and restoring vacuum error information. */
typedef struct LVSavedErrInfo
{
@@ -232,8 +249,7 @@ static bool lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf,
bool sharelock, Buffer vmbuffer);
static void lazy_scan_prune(LVRelState *vacrel, Buffer buf,
BlockNumber blkno, Page page,
Buffer vmbuffer, bool all_visible_according_to_vm,
bool *has_lpdead_items);
LVPagePruneState *prunestate);
static bool lazy_scan_noprune(LVRelState *vacrel, Buffer buf,
BlockNumber blkno, Page page,
bool *has_lpdead_items);
@@ -837,7 +853,7 @@ lazy_scan_heap(LVRelState *vacrel)
Buffer buf;
Page page;
bool all_visible_according_to_vm;
bool has_lpdead_items;
LVPagePruneState prunestate;
if (blkno == next_unskippable_block)
{
@@ -942,6 +958,8 @@ lazy_scan_heap(LVRelState *vacrel)
page = BufferGetPage(buf);
if (!ConditionalLockBufferForCleanup(buf))
{
bool has_lpdead_items;
LockBuffer(buf, BUFFER_LOCK_SHARE);
/* Check for new or empty pages before lazy_scan_noprune call */
@@ -1014,51 +1032,215 @@ lazy_scan_heap(LVRelState *vacrel)
* tuple headers of remaining items with storage. It also determines
* if truncating this block is safe.
*/
lazy_scan_prune(vacrel, buf, blkno, page,
vmbuffer, all_visible_according_to_vm,
&has_lpdead_items);
lazy_scan_prune(vacrel, buf, blkno, page, &prunestate);
Assert(!prunestate.all_visible || !prunestate.has_lpdead_items);
if (vacrel->nindexes == 0)
{
/*
* Consider the need to do page-at-a-time heap vacuuming when
* using the one-pass strategy now.
*
* The one-pass strategy will never call lazy_vacuum(). The steps
* performed here can be thought of as the one-pass equivalent of
* a call to lazy_vacuum().
*/
if (prunestate.has_lpdead_items)
{
Size freespace;
lazy_vacuum_heap_page(vacrel, blkno, buf, 0, vmbuffer);
/* Forget the LP_DEAD items that we just vacuumed */
dead_items->num_items = 0;
/*
* Now perform FSM processing for blkno, and move on to next
* page.
*
* Our call to lazy_vacuum_heap_page() will have considered if
* it's possible to set all_visible/all_frozen independently
* of lazy_scan_prune(). Note that prunestate was invalidated
* by lazy_vacuum_heap_page() call.
*/
freespace = PageGetHeapFreeSpace(page);
UnlockReleaseBuffer(buf);
RecordPageWithFreeSpace(vacrel->rel, blkno, freespace);
/*
* Periodically perform FSM vacuuming to make newly-freed
* space visible on upper FSM pages. FreeSpaceMapVacuumRange()
* vacuums the portion of the freespace map covering heap
* pages from start to end - 1. Include the block we just
* vacuumed by passing it blkno + 1. Overflow isn't an issue
* because MaxBlockNumber + 1 is InvalidBlockNumber which
* causes FreeSpaceMapVacuumRange() to vacuum freespace map
* pages covering the remainder of the relation.
*/
if (blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES)
{
FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum,
blkno + 1);
next_fsm_block_to_vacuum = blkno + 1;
}
continue;
}
/*
* There was no call to lazy_vacuum_heap_page() because pruning
* didn't encounter/create any LP_DEAD items that needed to be
* vacuumed. Prune state has not been invalidated, so proceed
* with prunestate-driven visibility map and FSM steps (just like
* the two-pass strategy).
*/
Assert(dead_items->num_items == 0);
}
/*
* Handle setting visibility map bit based on information from the VM
* (as of last lazy_scan_skip() call), and from prunestate
*/
if (!all_visible_according_to_vm && prunestate.all_visible)
{
uint8 flags = VISIBILITYMAP_ALL_VISIBLE;
if (prunestate.all_frozen)
{
Assert(!TransactionIdIsValid(prunestate.visibility_cutoff_xid));
flags |= VISIBILITYMAP_ALL_FROZEN;
}
/*
* It should never be the case that the visibility map page is set
* while the page-level bit is clear, but the reverse is allowed
* (if checksums are not enabled). Regardless, set both bits so
* that we get back in sync.
*
* NB: If the heap page is all-visible but the VM bit is not set,
* we don't need to dirty the heap page. However, if checksums
* are enabled, we do need to make sure that the heap page is
* dirtied before passing it to visibilitymap_set(), because it
* may be logged. Given that this situation should only happen in
* rare cases after a crash, it is not worth optimizing.
*/
PageSetAllVisible(page);
MarkBufferDirty(buf);
visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
vmbuffer, prunestate.visibility_cutoff_xid,
flags);
}
/*
* As of PostgreSQL 9.2, the visibility map bit should never be set if
* the page-level bit is clear. However, it's possible that the bit
* got cleared after lazy_scan_skip() was called, so we must recheck
* with buffer lock before concluding that the VM is corrupt.
*/
else if (all_visible_according_to_vm && !PageIsAllVisible(page) &&
visibilitymap_get_status(vacrel->rel, blkno, &vmbuffer) != 0)
{
elog(WARNING, "page is not marked all-visible but visibility map bit is set in relation \"%s\" page %u",
vacrel->relname, blkno);
visibilitymap_clear(vacrel->rel, blkno, vmbuffer,
VISIBILITYMAP_VALID_BITS);
}
/*
* It's possible for the value returned by
* GetOldestNonRemovableTransactionId() to move backwards, so it's not
* wrong for us to see tuples that appear to not be visible to
* everyone yet, while PD_ALL_VISIBLE is already set. The real safe
* xmin value never moves backwards, but
* GetOldestNonRemovableTransactionId() is conservative and sometimes
* returns a value that's unnecessarily small, so if we see that
* contradiction it just means that the tuples that we think are not
* visible to everyone yet actually are, and the PD_ALL_VISIBLE flag
* is correct.
*
* There should never be LP_DEAD items on a page with PD_ALL_VISIBLE
* set, however.
*/
else if (prunestate.has_lpdead_items && PageIsAllVisible(page))
{
elog(WARNING, "page containing LP_DEAD items is marked as all-visible in relation \"%s\" page %u",
vacrel->relname, blkno);
PageClearAllVisible(page);
MarkBufferDirty(buf);
visibilitymap_clear(vacrel->rel, blkno, vmbuffer,
VISIBILITYMAP_VALID_BITS);
}
/*
* If the all-visible page is all-frozen but not marked as such yet,
* mark it as all-frozen. Note that all_frozen is only valid if
* all_visible is true, so we must check both prunestate fields.
*/
else if (all_visible_according_to_vm && prunestate.all_visible &&
prunestate.all_frozen &&
!VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer))
{
/*
* Avoid relying on all_visible_according_to_vm as a proxy for the
* page-level PD_ALL_VISIBLE bit being set, since it might have
* become stale -- even when all_visible is set in prunestate
*/
if (!PageIsAllVisible(page))
{
PageSetAllVisible(page);
MarkBufferDirty(buf);
}
/*
* Set the page all-frozen (and all-visible) in the VM.
*
* We can pass InvalidTransactionId as our visibility_cutoff_xid,
* since a snapshotConflictHorizon sufficient to make everything
* safe for REDO was logged when the page's tuples were frozen.
*/
Assert(!TransactionIdIsValid(prunestate.visibility_cutoff_xid));
visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
vmbuffer, InvalidTransactionId,
VISIBILITYMAP_ALL_VISIBLE |
VISIBILITYMAP_ALL_FROZEN);
}
/*
* Final steps for block: drop cleanup lock, record free space in the
* FSM.
*
* If we will likely do index vacuuming, wait until
* lazy_vacuum_heap_rel() to save free space. This doesn't just save
* us some cycles; it also allows us to record any additional free
* space that lazy_vacuum_heap_page() will make available in cases
* where it's possible to truncate the page's line pointer array.
*
* Note: It's not in fact 100% certain that we really will call
* lazy_vacuum_heap_rel() -- lazy_vacuum() might yet opt to skip index
* vacuuming (and so must skip heap vacuuming). This is deemed okay
* because it only happens in emergencies, or when there is very
* little free space anyway. (Besides, we start recording free space
* in the FSM once index vacuuming has been abandoned.)
* FSM
*/
if (vacrel->nindexes == 0
|| !vacrel->do_index_vacuuming
|| !has_lpdead_items)
if (prunestate.has_lpdead_items && vacrel->do_index_vacuuming)
{
/*
* Wait until lazy_vacuum_heap_rel() to save free space. This
* doesn't just save us some cycles; it also allows us to record
* any additional free space that lazy_vacuum_heap_page() will
* make available in cases where it's possible to truncate the
* page's line pointer array.
*
* Note: It's not in fact 100% certain that we really will call
* lazy_vacuum_heap_rel() -- lazy_vacuum() might yet opt to skip
* index vacuuming (and so must skip heap vacuuming). This is
* deemed okay because it only happens in emergencies, or when
* there is very little free space anyway. (Besides, we start
* recording free space in the FSM once index vacuuming has been
* abandoned.)
*
* Note: The one-pass (no indexes) case is only supposed to make
* it this far when there were no LP_DEAD items during pruning.
*/
Assert(vacrel->nindexes > 0);
UnlockReleaseBuffer(buf);
}
else
{
Size freespace = PageGetHeapFreeSpace(page);
UnlockReleaseBuffer(buf);
RecordPageWithFreeSpace(vacrel->rel, blkno, freespace);
/*
* Periodically perform FSM vacuuming to make newly-freed space
* visible on upper FSM pages. This is done after vacuuming if the
* table has indexes.
*/
if (vacrel->nindexes == 0 && has_lpdead_items &&
blkno - next_fsm_block_to_vacuum >= VACUUM_FSM_EVERY_PAGES)
{
FreeSpaceMapVacuumRange(vacrel->rel, next_fsm_block_to_vacuum,
blkno);
next_fsm_block_to_vacuum = blkno;
}
}
else
UnlockReleaseBuffer(buf);
}
vacrel->blkno = InvalidBlockNumber;
@@ -1364,23 +1546,13 @@ lazy_scan_new_or_empty(LVRelState *vacrel, Buffer buf, BlockNumber blkno,
* right after this operation completes instead of in the middle of it. Note that
* any tuple that becomes dead after the call to heap_page_prune() can't need to
* be frozen, because it was visible to another session when vacuum started.
*
* vmbuffer is the buffer containing the VM block with visibility information
* for the heap block, blkno. all_visible_according_to_vm is the saved
* visibility status of the heap block looked up earlier by the caller. We
* won't rely entirely on this status, as it may be out of date.
*
* *has_lpdead_items is set to true or false depending on whether, upon return
* from this function, any LP_DEAD items are still present on the page.
*/
static void
lazy_scan_prune(LVRelState *vacrel,
Buffer buf,
BlockNumber blkno,
Page page,
Buffer vmbuffer,
bool all_visible_according_to_vm,
bool *has_lpdead_items)
LVPagePruneState *prunestate)
{
Relation rel = vacrel->rel;
OffsetNumber offnum,
@@ -1393,9 +1565,6 @@ lazy_scan_prune(LVRelState *vacrel,
recently_dead_tuples;
HeapPageFreeze pagefrz;
bool hastup = false;
bool all_visible,
all_frozen;
TransactionId visibility_cutoff_xid;
int64 fpi_before = pgWalUsage.wal_fpi;
OffsetNumber deadoffsets[MaxHeapTuplesPerPage];
HeapTupleFreeze frozen[MaxHeapTuplesPerPage];
@@ -1427,31 +1596,18 @@ lazy_scan_prune(LVRelState *vacrel,
* in presult.ndeleted. It should not be confused with lpdead_items;
* lpdead_items's final value can be thought of as the number of tuples
* that were deleted from indexes.
*
* If the relation has no indexes, we can immediately mark would-be dead
* items LP_UNUSED, so mark_unused_now should be true if no indexes and
* false otherwise.
*/
heap_page_prune(rel, buf, vacrel->vistest, vacrel->nindexes == 0,
&presult, &vacrel->offnum);
heap_page_prune(rel, buf, vacrel->vistest, &presult, &vacrel->offnum);
/*
* We will update the VM after collecting LP_DEAD items and freezing
* tuples. Keep track of whether or not the page is all_visible and
* all_frozen and use this information to update the VM. all_visible
* implies 0 lpdead_items, but don't trust all_frozen result unless
* all_visible is also set to true.
*
* Also keep track of the visibility cutoff xid for recovery conflicts.
* Now scan the page to collect LP_DEAD items and check for tuples
* requiring freezing among remaining tuples with storage
*/
all_visible = true;
all_frozen = true;
visibility_cutoff_xid = InvalidTransactionId;
prunestate->has_lpdead_items = false;
prunestate->all_visible = true;
prunestate->all_frozen = true;
prunestate->visibility_cutoff_xid = InvalidTransactionId;
/*
* Now scan the page to collect LP_DEAD items and update the variables set
* just above.
*/
for (offnum = FirstOffsetNumber;
offnum <= maxoff;
offnum = OffsetNumberNext(offnum))
@@ -1538,13 +1694,13 @@ lazy_scan_prune(LVRelState *vacrel,
* asynchronously. See SetHintBits for more info. Check that
* the tuple is hinted xmin-committed because of that.
*/
if (all_visible)
if (prunestate->all_visible)
{
TransactionId xmin;
if (!HeapTupleHeaderXminCommitted(htup))
{
all_visible = false;
prunestate->all_visible = false;
break;
}
@@ -1556,14 +1712,14 @@
if (!TransactionIdPrecedes(xmin,
vacrel->cutoffs.OldestXmin))
{
all_visible = false;
prunestate->all_visible = false;
break;
}
/* Track newest xmin on page. */
if (TransactionIdFollows(xmin, visibility_cutoff_xid) &&
if (TransactionIdFollows(xmin, prunestate->visibility_cutoff_xid) &&
TransactionIdIsNormal(xmin))
visibility_cutoff_xid = xmin;
prunestate->visibility_cutoff_xid = xmin;
}
break;
case HEAPTUPLE_RECENTLY_DEAD:
@@ -1574,7 +1730,7 @@
* pruning.)
*/
recently_dead_tuples++;
all_visible = false;
prunestate->all_visible = false;
break;
case HEAPTUPLE_INSERT_IN_PROGRESS:
@@ -1585,11 +1741,11 @@
* results. This assumption is a bit shaky, but it is what
* acquire_sample_rows() does, so be consistent.
*/
all_visible = false;
prunestate->all_visible = false;
break;
case HEAPTUPLE_DELETE_IN_PROGRESS:
/* This is an expected case during concurrent vacuum */
all_visible = false;
prunestate->all_visible = false;
/*
* Count such rows as live. As above, we assume the deleting
@@ -1619,7 +1775,7 @@
* definitely cannot be set all-frozen in the visibility map later on
*/
if (!totally_frozen)
all_frozen = false;
prunestate->all_frozen = false;
}
/*
@@ -1637,7 +1793,7 @@
* page all-frozen afterwards (might not happen until final heap pass).
*/
if (pagefrz.freeze_required || tuples_frozen == 0 ||
(all_visible && all_frozen &&
(prunestate->all_visible && prunestate->all_frozen &&
fpi_before != pgWalUsage.wal_fpi))
{
/*
@@ -1675,11 +1831,11 @@
* once we're done with it. Otherwise we generate a conservative
* cutoff by stepping back from OldestXmin.
*/
if (all_visible && all_frozen)
if (prunestate->all_visible && prunestate->all_frozen)
{
/* Using same cutoff when setting VM is now unnecessary */
snapshotConflictHorizon = visibility_cutoff_xid;
visibility_cutoff_xid = InvalidTransactionId;
snapshotConflictHorizon = prunestate->visibility_cutoff_xid;
prunestate->visibility_cutoff_xid = InvalidTransactionId;
}
else
{
@@ -1702,7 +1858,7 @@
*/
vacrel->NewRelfrozenXid = pagefrz.NoFreezePageRelfrozenXid;
vacrel->NewRelminMxid = pagefrz.NoFreezePageRelminMxid;
all_frozen = false;
prunestate->all_frozen = false;
tuples_frozen = 0; /* avoid miscounts in instrumentation */
}
@@ -1715,17 +1871,16 @@
*/
#ifdef USE_ASSERT_CHECKING
/* Note that all_frozen value does not matter when !all_visible */
if (all_visible && lpdead_items == 0)
if (prunestate->all_visible && lpdead_items == 0)
{
TransactionId debug_cutoff;
bool debug_all_frozen;
TransactionId cutoff;
bool all_frozen;
if (!heap_page_is_all_visible(vacrel, buf,
&debug_cutoff, &debug_all_frozen))
if (!heap_page_is_all_visible(vacrel, buf, &cutoff, &all_frozen))
Assert(false);
Assert(!TransactionIdIsValid(debug_cutoff) ||
debug_cutoff == visibility_cutoff_xid);
Assert(!TransactionIdIsValid(cutoff) ||
cutoff == prunestate->visibility_cutoff_xid);
}
#endif
@@ -1738,6 +1893,7 @@
ItemPointerData tmp;
vacrel->lpdead_item_pages++;
prunestate->has_lpdead_items = true;
ItemPointerSetBlockNumber(&tmp, blkno);
@@ -1762,7 +1918,7 @@
* Now that freezing has been finalized, unset all_visible. It needs
* to reflect the present state of things, as expected by our caller.
*/
all_visible = false;
prunestate->all_visible = false;
}
/* Finally, add page-local counts to whole-VACUUM counts */
@@ -1775,118 +1931,6 @@
/* Can't truncate this page */
if (hastup)
vacrel->nonempty_pages = blkno + 1;
/* Did we find LP_DEAD items? */
*has_lpdead_items = (lpdead_items > 0);
Assert(!all_visible || !(*has_lpdead_items));
/*
* Handle setting visibility map bit based on information from the VM (as
* of last lazy_scan_skip() call), and from all_visible and all_frozen
* variables
*/
if (!all_visible_according_to_vm && all_visible)
{
uint8 flags = VISIBILITYMAP_ALL_VISIBLE;
if (all_frozen)
{
Assert(!TransactionIdIsValid(visibility_cutoff_xid));
flags |= VISIBILITYMAP_ALL_FROZEN;
}
/*
* It should never be the case that the visibility map page is set
* while the page-level bit is clear, but the reverse is allowed (if
* checksums are not enabled). Regardless, set both bits so that we
* get back in sync.
*
* NB: If the heap page is all-visible but the VM bit is not set, we
* don't need to dirty the heap page. However, if checksums are
* enabled, we do need to make sure that the heap page is dirtied
* before passing it to visibilitymap_set(), because it may be logged.
* Given that this situation should only happen in rare cases after a
* crash, it is not worth optimizing.
*/
PageSetAllVisible(page);
MarkBufferDirty(buf);
visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
vmbuffer, visibility_cutoff_xid,
flags);
}
/*
* As of PostgreSQL 9.2, the visibility map bit should never be set if the
* page-level bit is clear. However, it's possible that the bit got
* cleared after lazy_scan_skip() was called, so we must recheck with
* buffer lock before concluding that the VM is corrupt.
*/
else if (all_visible_according_to_vm && !PageIsAllVisible(page) &&
visibilitymap_get_status(vacrel->rel, blkno, &vmbuffer) != 0)
{
elog(WARNING, "page is not marked all-visible but visibility map bit is set in relation \"%s\" page %u",
vacrel->relname, blkno);
visibilitymap_clear(vacrel->rel, blkno, vmbuffer,
VISIBILITYMAP_VALID_BITS);
}
/*
* It's possible for the value returned by
* GetOldestNonRemovableTransactionId() to move backwards, so it's not
* wrong for us to see tuples that appear to not be visible to everyone
* yet, while PD_ALL_VISIBLE is already set. The real safe xmin value
* never moves backwards, but GetOldestNonRemovableTransactionId() is
* conservative and sometimes returns a value that's unnecessarily small,
* so if we see that contradiction it just means that the tuples that we
* think are not visible to everyone yet actually are, and the
* PD_ALL_VISIBLE flag is correct.
*
* There should never be LP_DEAD items on a page with PD_ALL_VISIBLE set,
* however.
*/
else if (lpdead_items > 0 && PageIsAllVisible(page))
{
elog(WARNING, "page containing LP_DEAD items is marked as all-visible in relation \"%s\" page %u",
vacrel->relname, blkno);
PageClearAllVisible(page);
MarkBufferDirty(buf);
visibilitymap_clear(vacrel->rel, blkno, vmbuffer,
VISIBILITYMAP_VALID_BITS);
}
/*
* If the all-visible page is all-frozen but not marked as such yet, mark
* it as all-frozen. Note that all_frozen is only valid if all_visible is
* true, so we must check both all_visible and all_frozen.
*/
else if (all_visible_according_to_vm && all_visible &&
all_frozen && !VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer))
{
/*
* Avoid relying on all_visible_according_to_vm as a proxy for the
* page-level PD_ALL_VISIBLE bit being set, since it might have become
* stale -- even when all_visible is set
*/
if (!PageIsAllVisible(page))
{
PageSetAllVisible(page);
MarkBufferDirty(buf);
}
/*
* Set the page all-frozen (and all-visible) in the VM.
*
* We can pass InvalidTransactionId as our visibility_cutoff_xid,
* since a snapshotConflictHorizon sufficient to make everything safe
* for REDO was logged when the page's tuples were frozen.
*/
Assert(!TransactionIdIsValid(visibility_cutoff_xid));
visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,
vmbuffer, InvalidTransactionId,
VISIBILITYMAP_ALL_VISIBLE |
VISIBILITYMAP_ALL_FROZEN);
}
}
/*
@@ -2476,7 +2520,7 @@ lazy_vacuum_heap_page(LVRelState *vacrel, BlockNumber blkno, Buffer buffer,
bool all_frozen;
LVSavedErrInfo saved_err_info;
Assert(vacrel->do_index_vacuuming);
Assert(vacrel->nindexes == 0 || vacrel->do_index_vacuuming);
pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_VACUUMED, blkno);


@@ -107,7 +107,6 @@ do { \
static IndexScanDesc index_beginscan_internal(Relation indexRelation,
int nkeys, int norderbys, Snapshot snapshot,
ParallelIndexScanDesc pscan, bool temp_snap);
static inline void validate_relation_kind(Relation r);
/* ----------------------------------------------------------------
@@ -136,30 +135,12 @@ index_open(Oid relationId, LOCKMODE lockmode)
r = relation_open(relationId, lockmode);
validate_relation_kind(r);
return r;
}
/* ----------------
* try_index_open - open an index relation by relation OID
*
* Same as index_open, except return NULL instead of failing
* if the relation does not exist.
* ----------------
*/
Relation
try_index_open(Oid relationId, LOCKMODE lockmode)
{
Relation r;
r = try_relation_open(relationId, lockmode);
/* leave if index does not exist */
if (!r)
return NULL;
validate_relation_kind(r);
if (r->rd_rel->relkind != RELKIND_INDEX &&
r->rd_rel->relkind != RELKIND_PARTITIONED_INDEX)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("\"%s\" is not an index",
RelationGetRelationName(r))));
return r;
}
@@ -187,24 +168,6 @@ index_close(Relation relation, LOCKMODE lockmode)
UnlockRelationId(&relid, lockmode);
}
/* ----------------
* validate_relation_kind - check the relation's kind
*
* Make sure relkind is an index or a partitioned index.
* ----------------
*/
static inline void
validate_relation_kind(Relation r)
{
if (r->rd_rel->relkind != RELKIND_INDEX &&
r->rd_rel->relkind != RELKIND_PARTITIONED_INDEX)
ereport(ERROR,
(errcode(ERRCODE_WRONG_OBJECT_TYPE),
errmsg("\"%s\" is not an index",
RelationGetRelationName(r))));
}
/* ----------------
* index_insert - insert an index tuple into a relation
* ----------------


@@ -3634,24 +3634,7 @@ reindex_index(const ReindexStmt *stmt, Oid indexId,
* Open the target index relation and get an exclusive lock on it, to
* ensure that no one else is touching this particular index.
*/
if ((params->options & REINDEXOPT_MISSING_OK) != 0)
iRel = try_index_open(indexId, AccessExclusiveLock);
else
iRel = index_open(indexId, AccessExclusiveLock);
/* if index relation is gone, leave */
if (!iRel)
{
/* Roll back any GUC changes */
AtEOXact_GUC(false, save_nestlevel);
/* Restore userid and security context */
SetUserIdAndSecContext(save_userid, save_sec_context);
/* Close parent heap relation, but keep locks */
table_close(heapRelation, NoLock);
return;
}
iRel = index_open(indexId, AccessExclusiveLock);
if (progress)
pgstat_progress_update_param(PROGRESS_CREATEIDX_ACCESS_METHOD_OID,


@@ -174,7 +174,7 @@ CreateEventTrigger(CreateEventTrigStmt *stmt)
else if (strcmp(stmt->eventname, "login") == 0 && tags != NULL)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("tag filtering is not supported for login event trigger")));
errmsg("Tag filtering is not supported for login event trigger")));
/*
* Give user a nice error message if an event trigger of the same name


@@ -868,7 +868,7 @@ AlterRole(ParseState *pstate, AlterRoleStmt *stmt)
ereport(ERROR,
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
errmsg("permission denied to alter role"),
errdetail("The bootstrap superuser must have the %s attribute.",
errdetail("The bootstrap user must have the %s attribute.",
"SUPERUSER")));
new_record[Anum_pg_authid_rolsuper - 1] = BoolGetDatum(should_be_super);


@@ -89,7 +89,10 @@ DiscreteKnapsack(int max_weight, int num_items,
{
/* copy sets[ow] to sets[j] without realloc */
if (j != ow)
sets[j] = bms_replace_members(sets[j], sets[ow]);
{
sets[j] = bms_del_members(sets[j], sets[j]);
sets[j] = bms_add_members(sets[j], sets[ow]);
}
sets[j] = bms_add_member(sets[j], i);


@@ -976,50 +976,6 @@ bms_add_members(Bitmapset *a, const Bitmapset *b)
return result;
}
/*
* bms_replace_members
* Remove all existing members from 'a' and repopulate the set with members
* from 'b', recycling 'a', when possible.
*/
Bitmapset *
bms_replace_members(Bitmapset *a, const Bitmapset *b)
{
int i;
Assert(bms_is_valid_set(a));
Assert(bms_is_valid_set(b));
if (a == NULL)
return bms_copy(b);
if (b == NULL)
{
pfree(a);
return NULL;
}
if (a->nwords < b->nwords)
a = (Bitmapset *) repalloc(a, BITMAPSET_SIZE(b->nwords));
i = 0;
do
{
a->words[i] = b->words[i];
} while (++i < b->nwords);
a->nwords = b->nwords;
#ifdef REALLOCATE_BITMAPSETS
/*
* There's no guarantee that the repalloc returned a new pointer, so copy
* and free unconditionally here.
*/
a = bms_copy_and_free(a);
#endif
return a;
}
/*
* bms_add_range
* Add members in the range of 'lower' to 'upper' to the set.


@@ -955,7 +955,7 @@ XLogWalRcvWrite(char *buf, Size nbytes, XLogRecPtr recptr, TimeLineID tli)
ereport(PANIC,
(errcode_for_file_access(),
errmsg("could not write to WAL segment %s "
"at offset %d, length %lu: %m",
"at offset %u, length %lu: %m",
xlogfname, startoff, (unsigned long) segbytes)));
}


@@ -3961,7 +3961,7 @@ check_debug_io_direct(char **newval, void **extra, GucSource source)
if (!SplitGUCList(rawstring, ',', &elemlist))
{
GUC_check_errdetail("Invalid list syntax in parameter %s",
GUC_check_errdetail("invalid list syntax in parameter %s",
"debug_io_direct");
pfree(rawstring);
list_free(elemlist);
@@ -3981,7 +3981,7 @@ check_debug_io_direct(char **newval, void **extra, GucSource source)
flags |= IO_DIRECT_WAL_INIT;
else
{
GUC_check_errdetail("Invalid option \"%s\"", item);
GUC_check_errdetail("invalid option \"%s\"", item);
result = false;
break;
}


@@ -2479,7 +2479,7 @@ errdetail_params(ParamListInfo params)
str = BuildParamLogString(params, NULL, log_parameter_max_length);
if (str && str[0] != '\0')
errdetail("Parameters: %s", str);
errdetail("parameters: %s", str);
}
return 0;
@@ -2494,7 +2494,7 @@ static int
errdetail_abort(void)
{
if (MyProc->recoveryConflictPending)
errdetail("Abort reason: recovery conflict");
errdetail("abort reason: recovery conflict");
return 0;
}


@@ -2140,7 +2140,7 @@ check_backtrace_functions(char **newval, void **extra, GucSource source)
", \n\t");
if (validlen != newvallen)
{
GUC_check_errdetail("Invalid character");
GUC_check_errdetail("invalid character");
return false;
}


@@ -290,7 +290,7 @@ select column1::jsonb from (values (:value), (:long)) as q;
my $log = PostgreSQL::Test::Utils::slurp_file($node->logfile);
unlike(
$log,
qr[DETAIL: Parameters: \$1 = '\{ invalid ',],
qr[DETAIL: parameters: \$1 = '\{ invalid ',],
"no parameters logged");
$log = undef;
@@ -331,7 +331,7 @@ select column1::jsonb from (values (:value), (:long)) as q;
$log = PostgreSQL::Test::Utils::slurp_file($node->logfile);
like(
$log,
qr[DETAIL: Parameters: \$1 = '\{ invalid ', \$2 = '''Valame Dios!'' dijo Sancho; ''no le dije yo a vuestra merced que mirase bien lo que hacia\?'''],
qr[DETAIL: parameters: \$1 = '\{ invalid ', \$2 = '''Valame Dios!'' dijo Sancho; ''no le dije yo a vuestra merced que mirase bien lo que hacia\?'''],
"parameter report does not truncate");
$log = undef;
@@ -376,7 +376,7 @@ select column1::jsonb from (values (:value), (:long)) as q;
$log = PostgreSQL::Test::Utils::slurp_file($node->logfile);
like(
$log,
qr[DETAIL: Parameters: \$1 = '\{ inval\.\.\.', \$2 = '''Valame\.\.\.'],
qr[DETAIL: parameters: \$1 = '\{ inval\.\.\.', \$2 = '''Valame\.\.\.'],
"parameter report truncates");
$log = undef;


@@ -139,7 +139,6 @@ typedef struct IndexOrderByDistance
#define IndexScanIsValid(scan) PointerIsValid(scan)
extern Relation index_open(Oid relationId, LOCKMODE lockmode);
extern Relation try_index_open(Oid relationId, LOCKMODE lockmode);
extern void index_close(Relation relation, LOCKMODE lockmode);
extern bool index_insert(Relation indexRelation,


@@ -320,7 +320,6 @@ struct GlobalVisState;
extern void heap_page_prune_opt(Relation relation, Buffer buffer);
extern void heap_page_prune(Relation relation, Buffer buffer,
struct GlobalVisState *vistest,
bool mark_unused_now,
PruneResult *presult,
OffsetNumber *off_loc);
extern void heap_page_prune_execute(Buffer buffer,


@@ -109,7 +109,6 @@ extern BMS_Membership bms_membership(const Bitmapset *a);
extern Bitmapset *bms_add_member(Bitmapset *a, int x);
extern Bitmapset *bms_del_member(Bitmapset *a, int x);
extern Bitmapset *bms_add_members(Bitmapset *a, const Bitmapset *b);
extern Bitmapset *bms_replace_members(Bitmapset *a, const Bitmapset *b);
extern Bitmapset *bms_add_range(Bitmapset *a, int lower, int upper);
extern Bitmapset *bms_int_members(Bitmapset *a, const Bitmapset *b);
extern Bitmapset *bms_del_members(Bitmapset *a, const Bitmapset *b);


@@ -32,9 +32,8 @@ DATA = plpgsql.control plpgsql--1.0.sql
REGRESS_OPTS = --dbname=$(PL_TESTDB)
REGRESS = plpgsql_array plpgsql_cache plpgsql_call plpgsql_control \
plpgsql_copy plpgsql_domain plpgsql_misc \
plpgsql_record plpgsql_simple plpgsql_transaction \
REGRESS = plpgsql_array plpgsql_call plpgsql_control plpgsql_copy plpgsql_domain \
plpgsql_record plpgsql_cache plpgsql_simple plpgsql_transaction \
plpgsql_trap plpgsql_trigger plpgsql_varprops
# where to find gen_keywordlist.pl and subsidiary files


@@ -1,31 +0,0 @@
--
-- Miscellaneous topics
--
-- Verify that we can parse new-style CREATE FUNCTION/PROCEDURE
do
$$
declare procedure int; -- check we still recognize non-keywords as vars
begin
create function test1() returns int
begin atomic
select 2 + 2;
end;
create or replace procedure test2(x int)
begin atomic
select x + 2;
end;
end
$$;
\sf test1
CREATE OR REPLACE FUNCTION public.test1()
RETURNS integer
LANGUAGE sql
BEGIN ATOMIC
SELECT (2 + 2);
END
\sf test2
CREATE OR REPLACE PROCEDURE public.test2(IN x integer)
LANGUAGE sql
BEGIN ATOMIC
SELECT (x + 2);
END


@@ -76,13 +76,12 @@ tests += {
'regress': {
'sql': [
'plpgsql_array',
'plpgsql_cache',
'plpgsql_call',
'plpgsql_control',
'plpgsql_copy',
'plpgsql_domain',
'plpgsql_misc',
'plpgsql_record',
'plpgsql_cache',
'plpgsql_simple',
'plpgsql_transaction',
'plpgsql_trap',


@@ -76,8 +76,7 @@ static PLpgSQL_expr *read_sql_expression2(int until, int until2,
int *endtoken);
static PLpgSQL_expr *read_sql_stmt(void);
static PLpgSQL_type *read_datatype(int tok);
static PLpgSQL_stmt *make_execsql_stmt(int firsttoken, int location,
PLword *word);
static PLpgSQL_stmt *make_execsql_stmt(int firsttoken, int location);
static PLpgSQL_stmt_fetch *read_fetch_direction(void);
static void complete_direction(PLpgSQL_stmt_fetch *fetch,
bool *check_FROM);
@@ -1973,15 +1972,15 @@ loop_body : proc_sect K_END K_LOOP opt_label ';'
*/
stmt_execsql : K_IMPORT
{
$$ = make_execsql_stmt(K_IMPORT, @1, NULL);
$$ = make_execsql_stmt(K_IMPORT, @1);
}
| K_INSERT
{
$$ = make_execsql_stmt(K_INSERT, @1, NULL);
$$ = make_execsql_stmt(K_INSERT, @1);
}
| K_MERGE
{
$$ = make_execsql_stmt(K_MERGE, @1, NULL);
$$ = make_execsql_stmt(K_MERGE, @1);
}
| T_WORD
{
@@ -1992,7 +1991,7 @@ stmt_execsql : K_IMPORT
if (tok == '=' || tok == COLON_EQUALS ||
tok == '[' || tok == '.')
word_is_not_variable(&($1), @1);
$$ = make_execsql_stmt(T_WORD, @1, &($1));
$$ = make_execsql_stmt(T_WORD, @1);
}
| T_CWORD
{
@@ -2003,7 +2002,7 @@ stmt_execsql : K_IMPORT
if (tok == '=' || tok == COLON_EQUALS ||
tok == '[' || tok == '.')
cword_is_not_variable(&($1), @1);
$$ = make_execsql_stmt(T_CWORD, @1, NULL);
$$ = make_execsql_stmt(T_CWORD, @1);
}
;
@@ -2948,13 +2947,8 @@ read_datatype(int tok)
return result;
}
/*
* Read a generic SQL statement. We have already read its first token;
* firsttoken is that token's code and location its starting location.
* If firsttoken == T_WORD, pass its yylval value as "word", else pass NULL.
*/
static PLpgSQL_stmt *
make_execsql_stmt(int firsttoken, int location, PLword *word)
make_execsql_stmt(int firsttoken, int location)
{
StringInfoData ds;
IdentifierLookup save_IdentifierLookup;
@@ -2967,16 +2961,9 @@ make_execsql_stmt(int firsttoken, int location, PLword *word)
bool have_strict = false;
int into_start_loc = -1;
int into_end_loc = -1;
int paren_depth = 0;
int begin_depth = 0;
bool in_routine_definition = false;
int token_count = 0;
char tokens[4]; /* records the first few tokens */
initStringInfo(&ds);
memset(tokens, 0, sizeof(tokens));
/* special lookup mode for identifiers within the SQL text */
save_IdentifierLookup = plpgsql_IdentifierLookup;
plpgsql_IdentifierLookup = IDENTIFIER_LOOKUP_EXPR;
@@ -2985,12 +2972,6 @@ make_execsql_stmt(int firsttoken, int location, PLword *word)
* Scan to the end of the SQL command. Identify any INTO-variables
* clause lurking within it, and parse that via read_into_target().
*
* The end of the statement is defined by a semicolon ... except that
* semicolons within parentheses or BEGIN/END blocks don't terminate a
* statement. We follow psql's lead in not recognizing BEGIN/END except
* after CREATE [OR REPLACE] {FUNCTION|PROCEDURE}. END can also appear
* within a CASE construct, so we treat CASE/END like BEGIN/END.
*
* Because INTO is sometimes used in the main SQL grammar, we have to be
* careful not to take any such usage of INTO as a PL/pgSQL INTO clause.
* There are currently three such cases:
@@ -3016,50 +2997,13 @@ make_execsql_stmt(int firsttoken, int location, PLword *word)
* break this logic again ... beware!
*/
tok = firsttoken;
if (tok == T_WORD && strcmp(word->ident, "create") == 0)
tokens[token_count] = 'c';
token_count++;
for (;;)
{
prev_tok = tok;
tok = yylex();
if (have_into && into_end_loc < 0)
into_end_loc = yylloc; /* token after the INTO part */
/* Detect CREATE [OR REPLACE] {FUNCTION|PROCEDURE} */
if (tokens[0] == 'c' && token_count < sizeof(tokens))
{
if (tok == K_OR)
tokens[token_count] = 'o';
else if (tok == T_WORD &&
strcmp(yylval.word.ident, "replace") == 0)
tokens[token_count] = 'r';
else if (tok == T_WORD &&
strcmp(yylval.word.ident, "function") == 0)
tokens[token_count] = 'f';
else if (tok == T_WORD &&
strcmp(yylval.word.ident, "procedure") == 0)
tokens[token_count] = 'f'; /* treat same as "function" */
if (tokens[1] == 'f' ||
(tokens[1] == 'o' && tokens[2] == 'r' && tokens[3] == 'f'))
in_routine_definition = true;
token_count++;
}
/* Track paren nesting (needed for CREATE RULE syntax) */
if (tok == '(')
paren_depth++;
else if (tok == ')' && paren_depth > 0)
paren_depth--;
/* We need track BEGIN/END nesting only in a routine definition */
if (in_routine_definition && paren_depth == 0)
{
if (tok == K_BEGIN || tok == K_CASE)
begin_depth++;
else if (tok == K_END && begin_depth > 0)
begin_depth--;
}
/* Command-ending semicolon? */
if (tok == ';' && paren_depth == 0 && begin_depth == 0)
if (tok == ';')
break;
if (tok == 0)
yyerror("unexpected end of function definition");


@@ -1,22 +0,0 @@
--
-- Miscellaneous topics
--
-- Verify that we can parse new-style CREATE FUNCTION/PROCEDURE
do
$$
declare procedure int; -- check we still recognize non-keywords as vars
begin
create function test1() returns int
begin atomic
select 2 + 2;
end;
create or replace procedure test2(x int)
begin atomic
select x + 2;
end;
end
$$;
\sf test1
\sf test2


@@ -2012,15 +2012,12 @@ sub psql
# We don't use IPC::Run::Simple to limit dependencies.
#
# We always die on signal.
if (defined $ret)
{
my $core = $ret & 128 ? " (core dumped)" : "";
die "psql exited with signal "
. ($ret & 127)
. "$core: '$$stderr' while running '@psql_params'"
if $ret & 127;
$ret = $ret >> 8;
}
my $core = $ret & 128 ? " (core dumped)" : "";
die "psql exited with signal "
. ($ret & 127)
. "$core: '$$stderr' while running '@psql_params'"
if $ret & 127;
$ret = $ret >> 8;
if ($ret && $params{on_error_die})
{


@@ -5,7 +5,6 @@
use strict;
use warnings FATAL => 'all';
use PostgreSQL::Test::Cluster;
use PostgreSQL::Test::Utils;
use Test::More;
my ($node_publisher, $node_subscriber, $publisher_connstr, $result, $offset);
@@ -331,91 +330,81 @@ $node_subscriber->wait_for_log(
# If the subscription connection requires a password ('password_required'
# is true) then a non-superuser must specify that password in the connection
# string.
SKIP:
$ENV{"PGPASSWORD"} = 'secret';
my $node_publisher1 = PostgreSQL::Test::Cluster->new('publisher1');
my $node_subscriber1 = PostgreSQL::Test::Cluster->new('subscriber1');
$node_publisher1->init(allows_streaming => 'logical');
$node_subscriber1->init;
$node_publisher1->start;
$node_subscriber1->start;
my $publisher_connstr1 =
$node_publisher1->connstr . ' user=regress_test_user dbname=postgres';
my $publisher_connstr2 =
$node_publisher1->connstr
. ' user=regress_test_user dbname=postgres password=secret';
for my $node ($node_publisher1, $node_subscriber1)
{
skip
"subscription password_required test cannot run without Unix-domain sockets",
3
unless $use_unix_sockets;
my $node_publisher1 = PostgreSQL::Test::Cluster->new('publisher1');
my $node_subscriber1 = PostgreSQL::Test::Cluster->new('subscriber1');
$node_publisher1->init(allows_streaming => 'logical');
$node_subscriber1->init;
$node_publisher1->start;
$node_subscriber1->start;
my $publisher_connstr1 =
$node_publisher1->connstr . ' user=regress_test_user dbname=postgres';
my $publisher_connstr2 =
$node_publisher1->connstr
. ' user=regress_test_user dbname=postgres password=secret';
for my $node ($node_publisher1, $node_subscriber1)
{
$node->safe_psql(
'postgres', qq(
CREATE ROLE regress_test_user PASSWORD 'secret' LOGIN REPLICATION;
GRANT CREATE ON DATABASE postgres TO regress_test_user;
GRANT PG_CREATE_SUBSCRIPTION TO regress_test_user;
));
}
$node_publisher1->safe_psql(
$node->safe_psql(
'postgres', qq(
SET SESSION AUTHORIZATION regress_test_user;
CREATE PUBLICATION regress_test_pub;
));
$node_subscriber1->safe_psql(
'postgres', qq(
CREATE SUBSCRIPTION regress_test_sub CONNECTION '$publisher_connstr1' PUBLICATION regress_test_pub;
));
# Wait for initial sync to finish
$node_subscriber1->wait_for_subscription_sync($node_publisher1,
'regress_test_sub');
my $save_pgpassword = $ENV{"PGPASSWORD"};
$ENV{"PGPASSWORD"} = 'secret';
# Setup pg_hba configuration so that logical replication connection without
# password is not allowed.
unlink($node_publisher1->data_dir . '/pg_hba.conf');
$node_publisher1->append_conf('pg_hba.conf',
qq{local all regress_test_user md5});
$node_publisher1->reload;
# Change the subscription owner to a non-superuser
$node_subscriber1->safe_psql(
'postgres', qq(
ALTER SUBSCRIPTION regress_test_sub OWNER TO regress_test_user;
));
# Non-superuser must specify password in the connection string
my ($ret, $stdout, $stderr) = $node_subscriber1->psql(
'postgres', qq(
SET SESSION AUTHORIZATION regress_test_user;
ALTER SUBSCRIPTION regress_test_sub REFRESH PUBLICATION;
));
isnt($ret, 0,
"non zero exit for subscription whose owner is a non-superuser must specify password parameter of the connection string"
);
ok( $stderr =~
m/DETAIL: Non-superusers must provide a password in the connection string./,
'subscription whose owner is a non-superuser must specify password parameter of the connection string'
);
$ENV{"PGPASSWORD"} = $save_pgpassword;
# It should succeed after including the password parameter of the connection
# string.
($ret, $stdout, $stderr) = $node_subscriber1->psql(
'postgres', qq(
SET SESSION AUTHORIZATION regress_test_user;
ALTER SUBSCRIPTION regress_test_sub CONNECTION '$publisher_connstr2';
ALTER SUBSCRIPTION regress_test_sub REFRESH PUBLICATION;
));
is($ret, 0,
"Non-superuser will be able to refresh the publication after specifying the password parameter of the connection string"
);
CREATE ROLE regress_test_user PASSWORD 'secret' LOGIN REPLICATION;
GRANT CREATE ON DATABASE postgres TO regress_test_user;
GRANT PG_CREATE_SUBSCRIPTION TO regress_test_user;
));
}
$node_publisher1->safe_psql(
'postgres', qq(
SET SESSION AUTHORIZATION regress_test_user;
CREATE PUBLICATION regress_test_pub;
));
$node_subscriber1->safe_psql(
'postgres', qq(
CREATE SUBSCRIPTION regress_test_sub CONNECTION '$publisher_connstr1' PUBLICATION regress_test_pub;
));
# Wait for initial sync to finish
$node_subscriber1->wait_for_subscription_sync($node_publisher1,
'regress_test_sub');
# Setup pg_hba configuration so that logical replication connection without
# password is not allowed.
unlink($node_publisher1->data_dir . '/pg_hba.conf');
$node_publisher1->append_conf('pg_hba.conf',
qq{local all regress_test_user md5});
$node_publisher1->reload;
# Change the subscription owner to a non-superuser
$node_subscriber1->safe_psql(
'postgres', qq(
ALTER SUBSCRIPTION regress_test_sub OWNER TO regress_test_user;
));
# Non-superuser must specify password in the connection string
my ($ret, $stdout, $stderr) = $node_subscriber1->psql(
'postgres', qq(
SET SESSION AUTHORIZATION regress_test_user;
ALTER SUBSCRIPTION regress_test_sub REFRESH PUBLICATION;
));
isnt($ret, 0,
"non zero exit for subscription whose owner is a non-superuser must specify password parameter of the connection string"
);
ok( $stderr =~ m/DETAIL: Non-superusers must provide a password in the connection string./,
'subscription whose owner is a non-superuser must specify password parameter of the connection string'
);
delete $ENV{"PGPASSWORD"};
# It should succeed after including the password parameter of the connection
# string.
($ret, $stdout, $stderr) = $node_subscriber1->psql(
'postgres', qq(
SET SESSION AUTHORIZATION regress_test_user;
ALTER SUBSCRIPTION regress_test_sub CONNECTION '$publisher_connstr2';
ALTER SUBSCRIPTION regress_test_sub REFRESH PUBLICATION;
));
is($ret, 0,
"Non-superuser will be able to refresh the publication after specifying the password parameter of the connection string"
);
done_testing();


@@ -1405,6 +1405,7 @@ LPVOID
LPWSTR
LSEG
LUID
LVPagePruneState
LVRelState
LVSavedErrInfo
LWLock