Mirror of https://github.com/postgres/postgres.git, synced 2025-10-09 00:05:07 -04:00
bufmgr: Use consistent naming of the clock-sweep algorithm
Minor edits to comments only.

Author: Greg Burd <greg@burd.me>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/70C6A5B5-2A20-4D0B-BC73-EB09DD62D61C@getmailspring.com
This commit is contained in:
parent e3d5ddb7ca
commit 50e4c6ace5
@@ -211,9 +211,9 @@ Buffer Ring Replacement Strategy
 When running a query that needs to access a large number of pages just once,
 such as VACUUM or a large sequential scan, a different strategy is used.
 A page that has been touched only by such a scan is unlikely to be needed
-again soon, so instead of running the normal clock sweep algorithm and
+again soon, so instead of running the normal clock-sweep algorithm and
 blowing out the entire buffer cache, a small ring of buffers is allocated
-using the normal clock sweep algorithm and those buffers are reused for the
+using the normal clock-sweep algorithm and those buffers are reused for the
 whole scan.  This also implies that much of the write traffic caused by such
 a statement will be done by the backend itself and not pushed off onto other
 processes.
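The ring behavior this README hunk describes can be sketched roughly as follows. This is a minimal illustration only, not PostgreSQL's implementation: ScanRing, RING_SIZE, ring_next_buffer and clock_sweep_allocate are names invented here (the real mechanism is BufferAccessStrategyData in freelist.c).

/*
 * Minimal sketch of the ring idea: a one-pass scan keeps reusing a small
 * fixed set of buffers instead of evicting pages across the whole buffer
 * cache.  All names here are illustrative stand-ins.
 */
#define RING_SIZE 32			/* e.g. 32 x 8 kB pages = 256 kB */

typedef struct
{
	int		buffers[RING_SIZE]; /* buffer ids claimed so far, -1 if unset */
	int		current;			/* next ring slot to hand out */
} ScanRing;

static int	clock_sweep_allocate(void); /* stand-in for the normal allocator */

static int
ring_next_buffer(ScanRing *ring)
{
	int		slot = ring->current;

	ring->current = (ring->current + 1) % RING_SIZE;

	/* First lap: fill the slot from the normal clock-sweep allocator. */
	if (ring->buffers[slot] == -1)
		ring->buffers[slot] = clock_sweep_allocate();

	/* Later laps: the same buffer is simply reused. */
	return ring->buffers[slot];
}

static int
clock_sweep_allocate(void)
{
	static int	next = 0;		/* dummy: pretend each call finds a buffer */

	return next++;
}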
@@ -3608,7 +3608,7 @@ BufferSync(int flags)
  * This is called periodically by the background writer process.
  *
  * Returns true if it's appropriate for the bgwriter process to go into
- * low-power hibernation mode.  (This happens if the strategy clock sweep
+ * low-power hibernation mode.  (This happens if the strategy clock-sweep
  * has been "lapped" and no buffer allocations have occurred recently,
  * or if the bgwriter has been effectively disabled by setting
  * bgwriter_lru_maxpages to 0.)
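As a rough paraphrase (not the actual BgBufferSync() code, whose body is not shown in this hunk), the hibernation decision that comment describes reduces to something like:

#include <stdbool.h>
#include <stdint.h>

/*
 * Rough paraphrase of the condition described above: hibernate when the
 * bgwriter has lapped the strategy clock-sweep and nothing was allocated
 * since the last round, or when bgwriter_lru_maxpages disables it.
 */
static bool
bgwriter_can_hibernate(long bufs_to_lap, uint32_t recent_alloc,
					   int bgwriter_lru_maxpages)
{
	if (bgwriter_lru_maxpages <= 0)
		return true;			/* effectively disabled */

	return bufs_to_lap == 0 && recent_alloc == 0;
}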
@@ -3658,7 +3658,7 @@ BgBufferSync(WritebackContext *wb_context)
 	uint32		new_recent_alloc;
 
 	/*
-	 * Find out where the freelist clock sweep currently is, and how many
+	 * Find out where the freelist clock-sweep currently is, and how many
 	 * buffer allocations have happened since our last call.
 	 */
 	strategy_buf_id = StrategySyncStart(&strategy_passes, &recent_alloc);
@@ -3679,8 +3679,8 @@ BgBufferSync(WritebackContext *wb_context)
 
 	/*
 	 * Compute strategy_delta = how many buffers have been scanned by the
-	 * clock sweep since last time.  If first time through, assume none. Then
-	 * see if we are still ahead of the clock sweep, and if so, how many
+	 * clock-sweep since last time.  If first time through, assume none. Then
+	 * see if we are still ahead of the clock-sweep, and if so, how many
 	 * buffers we could scan before we'd catch up with it and "lap" it. Note:
 	 * weird-looking coding of xxx_passes comparisons are to avoid bogus
 	 * behavior when the passes counts wrap around.
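The "weird-looking coding" that comment defends is wraparound arithmetic on the 32-bit pass counters. A simplified sketch of the idea, with parameter names invented here (the real computation is inline in BgBufferSync()):

#include <stdint.h>

/*
 * Sketch of the wrap-safe delta described above.  A position in the sweep
 * is passes * NBuffers + buf_id; subtracting the pass counters as unsigned
 * 32-bit values first, then reinterpreting the difference as signed, keeps
 * the result correct even after the pass counters wrap around.
 */
static long
strategy_delta_since(uint32_t prev_passes, int prev_buf_id,
					 uint32_t cur_passes, int cur_buf_id, int NBuffers)
{
	int32_t		passes_delta = (int32_t) (cur_passes - prev_passes);

	return (long) passes_delta * NBuffers + (cur_buf_id - prev_buf_id);
}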
@@ -33,7 +33,7 @@ typedef struct
 	slock_t		buffer_strategy_lock;
 
 	/*
-	 * Clock sweep hand: index of next buffer to consider grabbing. Note that
+	 * clock-sweep hand: index of next buffer to consider grabbing. Note that
 	 * this isn't a concrete buffer - we only ever increase the value. So, to
 	 * get an actual buffer, it needs to be used modulo NBuffers.
 	 */
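The "only ever increase the value, use it modulo NBuffers" scheme that comment describes can be illustrated with C11 atomics. This is a sketch, not the real code: the actual counter is the pg_atomic_uint32 nextVictimBuffer, advanced in freelist.c.

#include <stdatomic.h>

/*
 * Illustration of the increase-only hand described above: the counter is
 * only ever incremented, and a modulo maps each tick onto a real buffer.
 */
static atomic_uint sweep_hand;	/* stand-in for nextVictimBuffer */

static int
clock_hand_next(unsigned int NBuffers)
{
	unsigned int tick = atomic_fetch_add(&sweep_hand, 1);

	return (int) (tick % NBuffers);
}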
@@ -51,7 +51,7 @@ typedef struct
 	 * Statistics.  These counters should be wide enough that they can't
 	 * overflow during a single bgwriter cycle.
 	 */
-	uint32		completePasses; /* Complete cycles of the clock sweep */
+	uint32		completePasses; /* Complete cycles of the clock-sweep */
 	pg_atomic_uint32 numBufferAllocs;	/* Buffers allocated since last reset */
 
 	/*
@@ -311,7 +311,7 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state, bool *from_r
 		}
 	}
 
-	/* Nothing on the freelist, so run the "clock sweep" algorithm */
+	/* Nothing on the freelist, so run the "clock-sweep" algorithm */
 	trycounter = NBuffers;
 	for (;;)
 	{
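The loop opened in this hunk works roughly as follows. The sketch ignores locking, atomics and buffer headers, and invents the usage_count and pinned arrays; the real loop in StrategyGetBuffer() operates on packed buffer state.

#include <stdbool.h>

/*
 * Simplified shape of the clock-sweep loop begun above: spare recently
 * used buffers by decrementing their usage count, claim the first buffer
 * found with a zero count, and give up only after a full lap in which
 * every buffer was pinned.
 */
static int
clock_sweep_victim(int *usage_count, const bool *pinned, int NBuffers)
{
	static int	hand = 0;
	int			trycounter = NBuffers;

	for (;;)
	{
		int		buf = hand;

		hand = (hand + 1) % NBuffers;

		if (usage_count[buf] > 0)
		{
			usage_count[buf]--;		/* recently used: spare it this lap */
			trycounter = NBuffers;	/* progress made, restart lap count */
		}
		else if (pinned[buf])
		{
			if (--trycounter == 0)
				return -1;			/* every buffer pinned: caller must cope */
		}
		else
			return buf;				/* unpinned, usage 0: our victim */
	}
}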
@@ -511,7 +511,7 @@ StrategyInitialize(bool init)
 		StrategyControl->firstFreeBuffer = 0;
 		StrategyControl->lastFreeBuffer = NBuffers - 1;
 
-		/* Initialize the clock sweep pointer */
+		/* Initialize the clock-sweep pointer */
 		pg_atomic_init_u32(&StrategyControl->nextVictimBuffer, 0);
 
 		/* Clear statistics */
@@ -759,7 +759,7 @@ GetBufferFromRing(BufferAccessStrategy strategy, uint32 *buf_state)
 	 *
 	 * If usage_count is 0 or 1 then the buffer is fair game (we expect 1,
 	 * since our own previous usage of the ring element would have left it
-	 * there, but it might've been decremented by clock sweep since then). A
+	 * there, but it might've been decremented by clock-sweep since then). A
 	 * higher usage_count indicates someone else has touched the buffer, so we
 	 * shouldn't re-use it.
 	 */
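In other words, the fair-game test amounts to a simple threshold. A sketch with the field access simplified (the real code decodes refcount and usage_count from the packed buffer state):

#include <stdbool.h>

/*
 * Sketch of the fair-game test described above: the ring element can be
 * recycled only if nobody has it pinned and its usage count is at most
 * the single increment our own previous access left behind.
 */
static bool
ring_buffer_is_fair_game(unsigned int refcount, unsigned int usage_count)
{
	return refcount == 0 && usage_count <= 1;
}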
@@ -229,7 +229,7 @@ GetLocalVictimBuffer(void)
 	ResourceOwnerEnlarge(CurrentResourceOwner);
 
 	/*
-	 * Need to get a new buffer.  We use a clock sweep algorithm (essentially
+	 * Need to get a new buffer.  We use a clock-sweep algorithm (essentially
 	 * the same as what freelist.c does now...)
 	 */
 	trycounter = NLocBuffer;
@@ -80,8 +80,8 @@ StaticAssertDecl(BUF_REFCOUNT_BITS + BUF_USAGECOUNT_BITS + BUF_FLAG_BITS == 32,
  * The maximum allowed value of usage_count represents a tradeoff between
  * accuracy and speed of the clock-sweep buffer management algorithm.  A
  * large value (comparable to NBuffers) would approximate LRU semantics.
- * But it can take as many as BM_MAX_USAGE_COUNT+1 complete cycles of
- * clock sweeps to find a free buffer, so in practice we don't want the
+ * But it can take as many as BM_MAX_USAGE_COUNT+1 complete cycles of the
+ * clock-sweep hand to find a free buffer, so in practice we don't want the
  * value to be very large.
  */
 #define BM_MAX_USAGE_COUNT	5
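For a sense of scale, the BM_MAX_USAGE_COUNT+1 bound in that comment is simple arithmetic: each pass of the hand decrements a buffer's count by at most one, so a buffer starting at the maximum needs BM_MAX_USAGE_COUNT decay passes plus one more pass to actually be claimed. A worked version of that bound:

/*
 * Back-of-the-envelope for the bound above: decay passes plus one
 * claiming pass, times the buffers examined per pass.
 */
#define BM_MAX_USAGE_COUNT 5

static long
worst_case_buffers_examined(long NBuffers)
{
	return (BM_MAX_USAGE_COUNT + 1) * NBuffers;
}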