My tweak of these error messages in commit c359a1b082 contained the
thinko that a query containing a locking clause would always have
rowMarks set. Not so: when declaring a cursor, for instance, rowMarks
isn't set at the point we're checking, so we'd be dereferencing a
NULL pointer.
The fix is to pass the lock strength to the function raising the error,
instead of trying to reverse-engineer it. The result is not only more
robust, but also seems cleaner overall.
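In sketch form, the error site can now use the strength handed in by
its caller (LCS_asString() renders the strength as SQL text; the
surrounding details are illustrative):

    /* sketch: use the strength passed in by the caller, rather than
     * digging it out of qry->rowMarks (which may be NIL) */
    ereport(ERROR,
            (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
             errmsg("%s is not allowed with DISTINCT clause",
                    LCS_asString(strength))));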
Per report from Robert Haas.
We should really be reporting a useful error along with returning
a valid return code if pthread_mutex_lock() fails for
some reason. Add that, and back-patch to 9.0, as the prior patch was.
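A minimal sketch of the intended pattern (the variable names and error
text are illustrative, not necessarily the exact libpq code):

    int rc = pthread_mutex_lock(&ssl_config_mutex);

    if (rc != 0)
    {
        /* report something useful and hand back a sane return code */
        printfPQExpBuffer(&conn->errorMessage,
                          "could not acquire mutex: %s\n", strerror(rc));
        return -1;
    }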
Pointed out by Alvaro Herrera
We now use MVCC catalog scans, and, per discussion, have eliminated
all other remaining uses of SnapshotNow, so that we can now get rid of
it. This will break third-party code which is still using it, which
is intentional, as we want such code to be updated to do things the
new way.
I've been working with Nick Phillips on an issue he ran into when
trying to use threads with SSL client certificates. As it turns out,
the call in initialize_SSL() to SSL_CTX_use_certificate_chain_file()
will modify our SSL_context without any protection from other threads
that are also calling that function, or that are at some other point
trying to read from SSL_context.
To protect against this, I've written up the attached (based on an
initial patch from Nick and much subsequent discussion) which puts
locks around SSL_CTX_use_certificate_chain_file() and all of the other
users of SSL_context which weren't already protected.
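Roughly, as a sketch (ssl_config_mutex and fnbuf stand in for the
actual names; error reporting elided):

    if (pthread_mutex_lock(&ssl_config_mutex))
        return -1;

    /* SSL_CTX_use_certificate_chain_file() modifies SSL_context, so it
     * must not run concurrently with other readers or writers of it */
    if (SSL_CTX_use_certificate_chain_file(SSL_context, fnbuf) != 1)
    {
        pthread_mutex_unlock(&ssl_config_mutex);
        return -1;
    }
    pthread_mutex_unlock(&ssl_config_mutex);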
Nick Phillips, much reworked by Stephen Frost
Back-patch to 9.0 where we started loading the cert directly instead of
using a callback.
As pointed out by Tom Lane, we can allow other users of the error
handler callbacks to provide their own memory context by adding
the context to use to ErrorData and using that instead of explicitly
using ErrorContext.
This then allows GetErrorContextStack() to be called from inside
exception handlers, so modify plpgsql to take advantage of that and
add an associated regression test for it.
We'd find the same match twice if it was of zero length and not immediately
adjacent to the previous match. replace_text_regexp() got similar cases
right, so adjust this search logic to match that. Note that even though
the regexp_split_to_xxx() functions share this code, they did not display
equivalent misbehavior, because the second match would be considered
degenerate and ignored.
Jeevan Chalke, with some cosmetic changes by me.
Refactoring as part of commit 8ceb24568054232696dddc1166a8563bc78c900a
had the unintended effect of making REINDEX TABLE and REINDEX DATABASE
no longer validate constraints enforced by the indexes in question;
REINDEX INDEX still did so. Indexes marked invalid remained so, and
constraint violations arising from data corruption went undetected.
Back-patch to 9.0, like the causative commit.
plpgsql often just remembers SPI-result tuple tables in local variables,
and has no mechanism for freeing them if an ereport(ERROR) causes an escape
out of the execution function whose local variable it is. In the original
coding, that wasn't a problem because the tuple table would be cleaned up
when the function's SPI context went away during transaction abort.
However, once plpgsql grew the ability to trap exceptions, repeated
trapping of errors within a function could result in significant
intra-function-call memory leakage, as illustrated in bug #8279 from
Chad Wagner.
We could fix this locally in plpgsql with a bunch of PG_TRY/PG_CATCH
coding, but that would be tedious, probably slow, and prone to bugs of
omission; moreover it would do nothing for similar risks elsewhere.
What seems like a better plan is to make SPI itself responsible for
freeing tuple tables at subtransaction abort. This patch attacks the
problem that way, keeping a list of live tuple tables within each SPI
function context. Currently, such freeing is automatic for tuple tables
made within the failed subtransaction. We might later add a SPI call to
mark a tuple table as not to be freed this way, allowing callers to opt
out; but until someone exhibits a clear use-case for such behavior, it
doesn't seem worth bothering.
A very useful side-effect of this change is that SPI_freetuptable() can
now defend itself against bad calls, such as duplicate free requests;
this should make things more robust in many places. (In particular,
this reduces the risks involved if a third-party extension contains
now-redundant SPI_freetuptable() calls in error cleanup code.)
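For example, cleanup code along these lines in an extension (a sketch,
not code from this patch) is no longer dangerous:

    PG_CATCH();
    {
        /* defends against duplicate frees, e.g. if the subtransaction
         * abort already released the tuple table */
        SPI_freetuptable(tuptable);
        PG_RE_THROW();
    }
    PG_END_TRY();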
Even though the leakage problem is of long standing, it seems imprudent
to back-patch this into stable branches, since it does represent an API
semantics change for SPI users. We'll patch this in 9.3, but live with
the leakage in older branches.
This has a slight performance cost, but the only known consumer
of these functions, known at the SQL level as currtid and currtid2,
is pgsql-odbc, whose usage, we hope, is not intensive enough
to make this a problem.
Per discussion.
Instead, use the active snapshot. Per Tom Lane, this function is
most interested in knowing the range of tuples our scan will actually
see.
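The shape of the change, as a sketch:

    HeapScanDesc scan;

    /* scan with the snapshot the query will actually use, rather than
     * the instantaneous SnapshotNow */
    scan = heap_beginscan(rel, GetActiveSnapshot(), 0, NULL);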
This is another step towards full removal of SnapshotNow.
As GetErrorContextStack() borrowed setup and tear-down code from other
places, it was less than clear that it must only be called as a
top-level entry point into the error system and can't be called by an
exception handler (unlike the rest of the error system, which is set up
to be reentrant-safe).
Being called from an exception handler is outside the charter of
GetErrorContextStack(), so add a bit more protection against it,
improve the comments addressing why we have to set up an errordata
stack for this function at all, and add a few more regression tests.
Lack of clarity pointed out by Tom Lane; all bugs are mine.
This adds the ability to get the call stack as a string from within a
PL/PgSQL function, which can be handy for logging to a table, or to
include in a useful message to an end-user.
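At the C level, the facility boils down to something like this sketch:

    /* returns the current error context stack as a palloc'd string */
    char *stack = GetErrorContextStack();

    ereport(NOTICE, (errmsg("context:\n%s", stack)));
    pfree(stack);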
Pavel Stehule, reviewed by Rushabh Lathia and rather heavily whacked
around by Stephen Frost.
Previously one had to use slist_delete(), implying an additional scan of
the list, making this infrastructure considerably less efficient than
traditional Lists when deletion of element(s) in a long list is needed.
Modify the slist_foreach_modify() macro to support deleting the current
element in O(1) time, by keeping a "prev" pointer in addition to "cur"
and "next". Although this makes iteration with this macro a bit slower,
no real harm is done, since in any scenario where you're not going to
delete the current list element you might as well just use slist_foreach
instead. Improve the comments about when to use each macro.
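Typical usage now looks like this sketch (MyNode, link, and
should_remove() are illustrative):

    slist_mutable_iter iter;

    slist_foreach_modify(iter, &list)
    {
        MyNode *node = slist_container(MyNode, link, iter.cur);

        if (should_remove(node))
            slist_delete_current(&iter);    /* O(1), using iter.prev */
    }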
Back-patch to 9.3 so that we'll have consistent semantics in all branches
that provide ilist.h. Note this is an ABI break for callers of
slist_foreach_modify().
Andres Freund and Tom Lane
In a boolean column that contains mostly nulls, ANALYZE might not find
enough non-null values to populate the most-common-values stats,
but it would still create a pg_statistic entry with stanullfrac set.
The logic in booltestsel() for this situation did the wrong thing for
"col IS NOT TRUE" and "col IS NOT FALSE" tests, forgetting that null
values would satisfy these tests (so that the true selectivity would
be close to one, not close to zero). Per bug #8274.
Fix by Andrew Gierth, some comment-smithing by me.
Use of this function has spread into the parser and rewriter, so it seems
like time to pull it out of the optimizer and put it into the more central
nodeFuncs module. This eliminates the need to #include optimizer/clauses.h
in most of the calling files, demonstrating that this function was indeed a
bit outside the normal code reference patterns.
After further thought about implicit coercions appearing in a joinaliasvars
list, I realized that they represent an additional reason why we might need
to reference the join output column directly instead of referencing an
underlying column. Consider SELECT x FROM t1 LEFT JOIN t2 USING (x) where
t1.x is of type date while t2.x is of type timestamptz. The merged output
variable is of type timestamptz, but it won't go to null when t2 does,
therefore neither t1.x nor t2.x is a valid substitute reference.
The code in get_variable() actually gets this case right, since it knows
it shouldn't look through a coercion, but we failed to ensure that the
unqualified output column name would be globally unique. To fix, modify
the code that trawls for a dangerous situation so that it actually scans
through an unnamed join's joinaliasvars list to see if there are any
non-simple-Var entries.
It's possible to drop a column from an input table of a JOIN clause in a
view, if that column is nowhere actually referenced in the view. But it
will still be there in the JOIN clause's joinaliasvars list. We used to
replace such entries with NULL Const nodes, which is handy for generation
of RowExpr expansion of a whole-row reference to the view. The trouble
with that is that it can't be distinguished from the situation after
subquery pull-up of a constant subquery output expression below the JOIN.
Instead, replace such joinaliasvars with null pointers (empty expression
trees), which can't be confused with pulled-up expressions. expandRTE()
still emits the old convention, though, for convenience of RowExpr
generation and to reduce the risk of breaking extension code.
In HEAD and 9.3, this patch also fixes a problem with some new code in
ruleutils.c that was failing to cope with implicitly-casted joinaliasvars
entries, as per recent report from Feike Steenbergen. That oversight was
because of an inadequate description of the data structure in parsenodes.h,
which I've now corrected. There were some pre-existing oversights of the
same ilk elsewhere, which I believe are now all fixed.
In commit 0ac5ad5134 I changed some error messages from "FOR
UPDATE/SHARE" to a rather long gobbledygook which nobody liked. Then,
in commit cb9b66d31 I changed them again, but the alternative chosen
there was deemed suboptimal by Peter Eisentraut, who in message
1373937980.20441.8.camel@vanquo.pezone.net proposed an alternative
involving a dynamically-constructed string based on the actual locking
strength specified in the SQL command. This patch implements that
suggestion.
As far as I can determine, there's no code in the core distribution
that fails to explicitly set the snapshot of a scan or executor
state. If there is any such code, this will probably cause it to
seg fault; friendlier suggestions were discussed on pgsql-hackers,
but there was no consensus that anything more than this was
needed.
This is another step towards the hoped-for complete removal of
SnapshotNow.
PGTYPEStimestamp_defmt_scan() was declared twice inside different .c
files, with slightly different prototypes. Move it into a header file
and correct the prototype.
Per discussion on pgsql-hackers, these aren't really needed. Interim
versions of the background worker patch had the worker starting with
signals already unblocked, which would have made this necessary.
But the final version does not, so we don't really need it; and it
doesn't work well with the new facility for starting dynamic background
workers, so just rip it out.
Also per discussion on pgsql-hackers, back-patch this change to 9.3.
It's best to get the API break out of the way before we do an
official release of this facility, to avoid more pain for extension
authors later.
Previously, these functions took a HeapTupleHeader, but upcoming
patches for logical replication will introduce a new snapshot
type under which the tuple's TID will be used to look up (CMIN, CMAX)
for visibility determination purposes. This makes that information
available. Code churn is minimal since HeapTupleSatisfiesVisibility
took the HeapTuple anyway, and dereferenced it before calling the
satisfies function.
Independently of logical replication, this allows t_tableOid and
t_self to be cross-checked via assertions in tqual.c. This seems
like a useful way to make sure that all callers are setting these
values properly, which has been previously put forward as
desirable.
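The cross-checks are assertions along these lines (a sketch):

    /* in the tqual.c "satisfies" routines (sketch) */
    Assert(ItemPointerIsValid(&htup->t_self));
    Assert(htup->t_tableOid != InvalidOid);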
Andres Freund, reviewed by Álvaro Herrera
Also, tweak wording in comments (per Andres) and documentation (myself)
to point out that it's the database's default tablespace that can be
passed as 0, not DEFAULTTABLESPACE_OID. Robert Haas noticed the bug in
the code, but didn't update the accompanying prose.
Future patches are expected to introduce logical replication that
works by decoding WAL. WAL contains relfilenodes rather than relation
OIDs, so this infrastructure will be needed to find the relation OID
based on WAL contents.
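A sketch of the intended lookup, assuming a helper of roughly this
shape (the function and variable names are illustrative):

    /* map a RelFileNode seen in WAL back to the owning pg_class OID */
    Oid reloid = RelidByRelfilenode(rnode.spcNode, rnode.relNode);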
If logical replication does not make it into this release, we probably
should consider reverting this, since it will add some overhead to DDL
operations that create new relations. One additional index insert per
pg_class row is not a large overhead, but it's more than zero.
Another way of meeting the needs of logical replication would be to
add the relation OID to WAL, but that would burden DML operations, not
only DDL.
Andres Freund, with some changes by me. Design review, in earlier
versions, by Álvaro Herrera.
If an error is thrown out of the datatype I/O functions called by this
function, we need to do subtransaction cleanup, which the previous coding
entirely failed to do. Fortunately, both existing callers of this function
already have proper cleanup logic, so re-throwing the exception is enough.
Also, postpone creation of the resultset tupdesc until after the I/O
conversions are complete, so that we won't leak memory in TopMemoryContext
when such an error happens.
The new JSON API uses a bit of an unusual typedef scheme, where for
example OkeysState is a pointer to okeysState. And that's not applied
consistently either. Change that to the more usual PostgreSQL style
where struct typedefs are upper case, and use pointers explicitly.
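That is, rather than "typedef struct okeysState *OkeysState", use the
conventional form (a sketch; the field shown is illustrative):

    typedef struct OkeysState
    {
        JsonLexContext *lex;
        /* ... other fields ... */
    } OkeysState;

    OkeysState *state = palloc0(sizeof(OkeysState));    /* pointer explicit */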
By using only the macro that checks infomask bits,
HEAP_XMAX_IS_LOCKED_ONLY, to verify whether a multixact is not an
updater, and not the full HeapTupleHeaderIsOnlyLocked, it would come to
the wrong result for a multixact containing an aborted update, and
therefore return the wrong result code. This would cause predicate.c
to break completely (as in bug report #8273 from David Leverton), and
certain index builds would misbehave. As far as I can tell, other
callers of the bogus routine would make harmless mistakes or not be
affected by the difference at all; so this was a pretty narrow case.
Also, no other user of the HEAP_XMAX_IS_LOCKED_ONLY macro is as
careless; they all check specifically for the HEAP_XMAX_IS_MULTI case,
and they all verify whether the updater is InvalidXid before concluding
that it's a valid updater. So there doesn't seem to be any similar bug.
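The careful pattern looks roughly like this sketch:

    if (tuple->t_infomask & HEAP_XMAX_IS_MULTI)
    {
        TransactionId update_xact = HeapTupleGetUpdateXid(tuple);

        if (!TransactionIdIsValid(update_xact))
        {
            /* no live updater in the multixact: treat as locked only */
        }
    }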
This is mainly to suppress "uninitialized variable" warnings from very
recent versions of gcc. But it seems like a good robustness thing anyway,
not to mention that we might someday decide to support 6-byte UTF8.
Per report from Karol Trzcionka. No back-patch since there's no reason
at the moment to think this is more than cosmetic.
An ancient logic error in cfindloop() could cause the regex engine to fail
to find matches that begin later than the start of the string. This
function is only used when the regex pattern contains a back reference,
and so far as we can tell the error is only reachable if the pattern is
non-greedy (i.e. its first quantifier uses the ? modifier). Furthermore,
the actual match must begin after some potential match that satisfies the
DFA but then fails the back-reference's match test.
Reported and fixed by Jeevan Chalke, with cosmetic adjustments by me.
For simple views which are automatically updatable, this patch allows
the user to specify what level of checking should be done on records
being inserted or updated. For 'LOCAL CHECK', new tuples are validated
against the conditionals of the view they are being inserted into, while
for 'CASCADED CHECK' the new tuples are validated against the
conditionals for all views involved (from the top down).
This option is part of the SQL specification.
Dean Rasheed, reviewed by Pavel Stehule
This is more efficient and simpler. It does mean that an untyped NULL
can no longer be used in such cases, which should be mentioned in
Release Notes, but doesn't seem a terrible loss. The workaround is to
cast the NULL to some array type.
Pavel Stehule, reviewed by Jeevan Chalke.
Should use RelationGetIndexPredicate(), since rd_indpred is just a cache
that is not computed until/unless demanded. Per buildfarm failure on
CLOBBER_CACHE_ALWAYS animals; diagnosis and fix by Hitoshi Harada.
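In sketch form, the fix is simply:

    /* was: List *indpred = index->rd_indpred;  -- may not be set yet */
    List *indpred = RelationGetIndexPredicate(index);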
On HPPA, implement pg_memory_barrier() as pg_compiler_barrier(), which
should be correct since this arch doesn't do memory access reordering,
and is anyway better than the completely-nonfunctional-on-this-arch
dummy_spinlock code. (But note this patch only fixes things for gcc,
not for builds with HP's compiler.)
Also, fix incorrect default definition of pg_memory_barrier as a macro
requiring an argument.
Also, fix incorrect spelling of "#elif" as "#else if" in icc code path
(spotted by pgindent).
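In sketch form, the gcc-on-HPPA path now reads approximately (the
guard macros are illustrative):

    #if defined(__hppa) || defined(__hppa__)    /* HP PA-RISC */
    /* this arch does not reorder memory accesses, so a compiler barrier
     * is sufficient; note the macro correctly takes no argument */
    #define pg_memory_barrier()     pg_compiler_barrier()
    #endif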
This doesn't come close to fixing all of the functional and stylistic
deficiencies in barrier.h, but at least it un-breaks my personal build.
Now that we're actually using barriers in the code, this file is going
to need some serious attention.
Initialization of the first XLOG buffer at end-of-recovery was broken for
the case that the last read WAL record ended at a page boundary. Instead of
trying to copy the last full xlog page to the buffer cache in that case,
just set shared state so that the next page is initialized when the first
WAL record after startup is inserted. (That's what we did in earlier
versions, too.)
To make the shared state required for that case less surprising, replace the
XLogCtl->curridx variable, which was the index of the latest initialized
buffer, with an XLogRecPtr of how far the buffers have been initialized.
That also allows us to get rid of the XLogRecEndPtrToBufIdx macro.
While we're at it, make a similar change for XLogCtl->Write.curridx, getting
rid of that variable and calculating the next buffer to write from
XLogCtl->LogwrtResult instead.
After the recent pglz optimization patch, the next/prev pointers in the
hash table are never NULL; INVALID_ENTRY_PTR is used to represent invalid
entries instead. The end-of-loop check in the pglz_find_match() function didn't
get the memo. The result was the same from a correctness point of view, but
because the NULL-check would never fail, the tiny optimization turned into
a pessimization.
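The fix, as a sketch (the loop body is illustrative):

    /* stop at the sentinel; chain entries are never NULL anymore */
    while (hent != INVALID_ENTRY_PTR)   /* formerly: while (hent != NULL) */
    {
        /* ... check for a match starting at hent's position ... */
        hent = hent->next;
    }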
Reported by Stephen Frost, using Coverity scanner.