Compare commits

...

8 Commits

Michael Paquier
2f35c14cfb seg: Add test "security" in meson.build
Oversight in 681d9e4621aa, where the test was added.

Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/ZK5AgYxG4zLErD5O@telsasoft.com
Backpatch-through: 16
2024-01-18 10:12:44 +09:00
Alexander Korotkov
4b885d01f9 Remove the flaky check in event_trigger_login regression test
The query checks that pg_database.dathasloginevt is unset on connect when
there are no event triggers.  However, unsetting this flag is implemented in
a non-blocking way, so a concurrent autovacuum connection breaks this check.
It doesn't seem we can do much about this, at least within a regression test.
So, remove it.

Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/44807d19-81a6-3884-3e0f-22dd99aac3ed%40gmail.com
2024-01-17 23:16:53 +02:00
Alexander Korotkov
58fbbc9d68 Fix spelling in notice
Reported-by: Atsushi Torikoshi
Discussion: https://postgr.es/m/762d7dd4d5aa9e5ecffec2ae6a255a28%40oss.nttdata.com
2024-01-17 22:59:09 +02:00
Heikki Linnakangas
2b53a462cf Fix incorrect comment on how BackendStatusArray is indexed
The comment was copy-pasted from the call to ProcSignalInit() in
AuxiliaryProcessMain(), which uses a similar scheme of having reserved
slots for aux processes after MaxBackends slots for backends. However,
ProcSignalInit() indexing starts from 1, whereas BackendStatusArray
starts from 0. The code is correct, but the comment was wrong.

Discussion: https://www.postgresql.org/message-id/f3ecd4cb-85ee-4e54-8278-5fabfb3a4ed0@iki.fi
Backpatch-through: v14
2024-01-17 15:44:10 +02:00
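For readers comparing the two schemes, here is a compact illustrative sketch; the names are hypothetical stand-ins, not the actual PostgreSQL declarations (the real code uses MaxBackends and MyAuxProcType, as in the backend_status.c hunk below).

/* Hypothetical stand-ins to illustrate the two indexing schemes. */
#define MAX_BACKENDS 100        /* stand-in for MaxBackends */

/* ProcSignal slots: regular backends occupy 1..MAX_BACKENDS, so an
 * auxiliary process of type aux_type lands at MAX_BACKENDS + aux_type + 1. */
static int
proc_signal_slot(int aux_type)
{
    return MAX_BACKENDS + aux_type + 1;
}

/* BackendStatusArray: regular backends occupy 0..MAX_BACKENDS - 1, so an
 * auxiliary process of type aux_type lands at MAX_BACKENDS + aux_type. */
static int
backend_status_slot(int aux_type)
{
    return MAX_BACKENDS + aux_type;
}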
Daniel Gustafsson
7cfa154d15 Close socket in case of errors in setting non-blocking
If setting the newly created socket to non-blocking mode fails, we
error out and return INVALID_SOCKET, but the socket that had been
created was never closed. Fix by issuing closesocket() in the
error path.

Backpatch to all supported branches.

Author: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://postgr.es/m/CAEudQApmU5CrKefH85VbNYE2y8H=-qqEJbg6RAPU65+vCe+89A@mail.gmail.com
Backpatch-through: v12
2024-01-17 11:24:11 +01:00
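A minimal Winsock sketch of the fixed pattern; make_nonblocking_socket is a hypothetical simplified wrapper (the real pgwin32_socket does more, including TranslateSocketError()).

#include <winsock2.h>

/* Sketch of the fixed error path: if switching the socket to
 * non-blocking mode fails, close it before returning INVALID_SOCKET. */
static SOCKET
make_nonblocking_socket(int af, int type, int protocol)
{
    u_long  on = 1;
    SOCKET  s = socket(af, type, protocol);

    if (s == INVALID_SOCKET)
        return INVALID_SOCKET;

    if (ioctlsocket(s, FIONBIO, &on))
    {
        closesocket(s);         /* the leak fixed by this commit */
        return INVALID_SOCKET;
    }
    return s;
}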
Michael Paquier
44ad5129ce Fix description of DecodeInsert() in decode.c
This incorrectly referred to deletes.

Author: Yongtao Huang
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/CAOe1Go0Czgvo9eiDqeFpaABwJu=gBK6qjrYzZGZLn=tKDX8AUw@mail.gmail.com
2024-01-17 17:03:02 +09:00
Michael Paquier
061cc7eaca Remove some comments related to pqPipelineSync() and PQsendPipelineSync()
These comments explained how these functions behave internally, and the
equivalent is described in the documentation section dedicated to the
pipeline mode of libpq.  Let's remove these comments, getting rid of the
duplication with the docs.

Reported-by: Álvaro Herrera
Reviewed-by: Álvaro Herrera
Discussion: https://postgr.es/m/202401150949.wq7ynlmqxphy@alvherre.pgsql
2024-01-17 15:53:59 +09:00
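For reference, a minimal sketch of the pipeline flow the removed comments described, using only documented libpq calls; connection settings are assumed to come from the environment, and error handling is trimmed for brevity.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *conn = PQconnectdb("");     /* uses PG* environment */
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK || !PQenterPipelineMode(conn))
        return 1;

    /* Queue two queries, then mark a synchronization point. */
    PQsendQueryParams(conn, "SELECT 1", 0, NULL, NULL, NULL, NULL, 0);
    PQsendQueryParams(conn, "SELECT 2", 0, NULL, NULL, NULL, NULL, 0);
    PQpipelineSync(conn);

    /* Each queued query yields its results followed by a NULL... */
    for (int i = 0; i < 2; i++)
    {
        while ((res = PQgetResult(conn)) != NULL)
        {
            printf("status: %s\n", PQresStatus(PQresultStatus(res)));
            PQclear(res);
        }
    }

    /* ...and the sync point itself yields a PGRES_PIPELINE_SYNC result. */
    res = PQgetResult(conn);
    PQclear(res);

    PQexitPipelineMode(conn);
    PQfinish(conn);
    return 0;
}

If a queued command fails, the results of the remaining commands up to the Sync come back as PGRES_PIPELINE_ABORTED, as the removed comment (and the libpq docs) describe.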
Michael Paquier
2197d06224 Add support for parsing of large XML data (>= 10MB)
This commit adds XML_PARSE_HUGE to the libxml2 functions used in core
for the parsing of XML objects, raising the original limit of 10MB
imposed by libxml2.

In most code paths of upstream, XML_MAX_TEXT_LENGTH (10^7) is the
historical limit that gets upgraded to XML_MAX_HUGE_LENGTH (10^9) once
XML_PARSE_HUGE is given to the parser calls.  These are still limited by
any palloc() calls for text, up to 1GB.

This makes it possible for the backend to handle XML objects larger
than 10MB in general, along with a higher depth limit.  This change
affects the contrib module xml2, the xml data type and SQL/XML.

Author: Dmitry Koval
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/18274-98d16bc03520665f@postgresql.org
2024-01-17 14:03:55 +09:00
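For illustration, a minimal standalone libxml2 sketch of the option being enabled here; this is not backend code, and the tiny input buffer is only a placeholder.

#include <stdio.h>
#include <string.h>
#include <libxml/parser.h>

/* Without XML_PARSE_HUGE, libxml2 caps text nodes at
 * XML_MAX_TEXT_LENGTH (10^7 bytes); with it, the cap becomes
 * XML_MAX_HUGE_LENGTH (10^9 bytes). */
int
main(void)
{
    const char *buf = "<doc><msg>hello</msg></doc>";
    xmlDocPtr   doc;

    doc = xmlReadMemory(buf, strlen(buf), NULL, NULL,
                        XML_PARSE_HUGE | XML_PARSE_NOENT);
    if (doc == NULL)
    {
        fprintf(stderr, "could not parse XML document\n");
        return 1;
    }
    xmlFreeDoc(doc);
    xmlCleanupParser();
    return 0;
}

In the backend the same flag is passed to xmlReadMemory(), xmlCtxtReadMemory(), xmlCtxtReadDoc() and xmlParseInNodeContext(), and, per the commit message, text is still bounded by palloc() at 1GB.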
11 changed files with 40 additions and 42 deletions

contrib/seg/meson.build

@@ -53,6 +53,7 @@ tests += {
   'bd': meson.current_build_dir(),
   'regress': {
     'sql': [
+      'security',
       'seg',
     ],
   },

contrib/xml2/xpath.c

@@ -381,7 +381,7 @@ pgxml_xpath(text *document, xmlChar *xpath, xpath_workspace *workspace)
 {
     workspace->doctree = xmlReadMemory((char *) VARDATA_ANY(document),
                                        docsize, NULL, NULL,
-                                       XML_PARSE_NOENT);
+                                       XML_PARSE_HUGE | XML_PARSE_NOENT);
     if (workspace->doctree != NULL)
     {
         workspace->ctxt = xmlXPathNewContext(workspace->doctree);
@@ -626,7 +626,7 @@ xpath_table(PG_FUNCTION_ARGS)
     if (xmldoc)
         doctree = xmlReadMemory(xmldoc, strlen(xmldoc),
                                 NULL, NULL,
-                                XML_PARSE_NOENT);
+                                XML_PARSE_HUGE | XML_PARSE_NOENT);
     else                        /* treat NULL as not well-formed */
         doctree = NULL;

contrib/xml2/xslt_proc.c

@@ -87,7 +87,7 @@ xslt_process(PG_FUNCTION_ARGS)
     /* Parse document */
     doctree = xmlReadMemory((char *) VARDATA_ANY(doct),
                             VARSIZE_ANY_EXHDR(doct), NULL, NULL,
-                            XML_PARSE_NOENT);
+                            XML_PARSE_HUGE | XML_PARSE_NOENT);
     if (doctree == NULL)
         xml_ereport(xmlerrcxt, ERROR, ERRCODE_EXTERNAL_ROUTINE_EXCEPTION,
@@ -96,7 +96,7 @@ xslt_process(PG_FUNCTION_ARGS)
     /* Same for stylesheet */
     ssdoc = xmlReadMemory((char *) VARDATA_ANY(ssheet),
                           VARSIZE_ANY_EXHDR(ssheet), NULL, NULL,
-                          XML_PARSE_NOENT);
+                          XML_PARSE_HUGE | XML_PARSE_NOENT);
     if (ssdoc == NULL)
         xml_ereport(xmlerrcxt, ERROR, ERRCODE_EXTERNAL_ROUTINE_EXCEPTION,

src/backend/commands/copyfrom.c

@@ -1310,7 +1310,7 @@ CopyFrom(CopyFromState cstate)
     if (cstate->opts.save_error_to != COPY_SAVE_ERROR_TO_ERROR &&
         cstate->num_errors > 0)
         ereport(NOTICE,
-               errmsg_plural("%llu row were skipped due to data type incompatibility",
+               errmsg_plural("%llu row was skipped due to data type incompatibility",
                              "%llu rows were skipped due to data type incompatibility",
                              (unsigned long long) cstate->num_errors,
                              (unsigned long long) cstate->num_errors));

src/backend/port/win32/socket.c

@@ -303,6 +303,7 @@ pgwin32_socket(int af, int type, int protocol)
     if (ioctlsocket(s, FIONBIO, &on))
     {
         TranslateSocketError();
+        closesocket(s);
         return INVALID_SOCKET;
     }
     errno = 0;

src/backend/replication/logical/decode.c

@@ -890,7 +890,7 @@ DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,
 /*
  * Parse XLOG_HEAP_INSERT (not MULTI_INSERT!) records into tuplebufs.
  *
- * Deletes can contain the new tuple.
+ * Inserts can contain the new tuple.
  */
 static void
 DecodeInsert(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)

src/backend/utils/activity/backend_status.c

@@ -263,9 +263,9 @@ pgstat_beinit(void)
      * Assign the MyBEEntry for an auxiliary process. Since it doesn't
      * have a BackendId, the slot is statically allocated based on the
      * auxiliary process type (MyAuxProcType). Backends use slots indexed
-     * in the range from 1 to MaxBackends (inclusive), so we use
-     * MaxBackends + AuxBackendType + 1 as the index of the slot for an
-     * auxiliary process.
+     * in the range from 0 to MaxBackends (exclusive), so we use
+     * MaxBackends + AuxProcType as the index of the slot for an auxiliary
+     * process.
      */
     MyBEEntry = &BackendStatusArray[MaxBackends + MyAuxProcType];
 }

src/backend/utils/adt/xml.c

@@ -1688,8 +1688,8 @@ xml_doctype_in_content(const xmlChar *str)
  * xmloption_arg, but a DOCTYPE node in the input can force DOCUMENT mode).
  *
  * If parsed_nodes isn't NULL and the input is not an XML document, the list
- * of parsed nodes from the xmlParseBalancedChunkMemory call will be returned
- * to *parsed_nodes.
+ * of parsed nodes from the xmlParseInNodeContext call will be returned to
+ * *parsed_nodes.
  *
  * Errors normally result in ereport(ERROR), but if escontext is an
  * ErrorSaveContext, then "safe" errors are reported there instead, and the
@@ -1795,7 +1795,7 @@ xml_parse(text *data, XmlOptionType xmloption_arg,
         doc = xmlCtxtReadDoc(ctxt, utf8string,
                              NULL,
                              "UTF-8",
-                             XML_PARSE_NOENT | XML_PARSE_DTDATTR
+                             XML_PARSE_NOENT | XML_PARSE_DTDATTR | XML_PARSE_HUGE
                              | (preserve_whitespace ? 0 : XML_PARSE_NOBLANKS));
         if (doc == NULL || xmlerrcxt->err_occurred)
         {
@@ -1828,10 +1828,30 @@
         /* allow empty content */
         if (*(utf8string + count))
         {
-            res_code = xmlParseBalancedChunkMemory(doc, NULL, NULL, 0,
-                                                   utf8string + count,
-                                                   parsed_nodes);
-            if (res_code != 0 || xmlerrcxt->err_occurred)
+            const char *data;
+            xmlNodePtr  root;
+            xmlNodePtr  lst;
+            xmlParserErrors xml_error;
+
+            data = (const char *) (utf8string + count);
+
+            /*
+             * Create a fake root node. The xmlNewDoc() function creates
+             * an XML document without any nodes, and this is required for
+             * xmlParseInNodeContext() that is able to handle
+             * XML_PARSE_HUGE.
+             */
+            root = xmlNewNode(NULL, (const xmlChar *) "content-root");
+            if (root == NULL || xmlerrcxt->err_occurred)
+                xml_ereport(xmlerrcxt, ERROR, ERRCODE_OUT_OF_MEMORY,
+                            "could not allocate xml node");
+            xmlDocSetRootElement(doc, root);
+
+            /* Try to parse string with using root node context. */
+            xml_error = xmlParseInNodeContext(root, data, strlen(data),
+                                              XML_PARSE_HUGE,
+                                              parsed_nodes ? parsed_nodes : &lst);
+            if (xml_error != XML_ERR_OK || xmlerrcxt->err_occurred)
             {
                 xml_errsave(escontext, xmlerrcxt,
                             ERRCODE_INVALID_XML_CONTENT,
@@ -4344,7 +4364,7 @@ xpath_internal(text *xpath_expr_text, xmltype *data, ArrayType *namespaces,
         xml_ereport(xmlerrcxt, ERROR, ERRCODE_OUT_OF_MEMORY,
                     "could not allocate parser context");
     doc = xmlCtxtReadMemory(ctxt, (char *) string + xmldecl_len,
-                            len - xmldecl_len, NULL, NULL, 0);
+                            len - xmldecl_len, NULL, NULL, XML_PARSE_HUGE);
     if (doc == NULL || xmlerrcxt->err_occurred)
         xml_ereport(xmlerrcxt, ERROR, ERRCODE_INVALID_XML_DOCUMENT,
                     "could not parse XML document");
@@ -4675,7 +4695,7 @@ XmlTableSetDocument(TableFuncScanState *state, Datum value)
     PG_TRY();
     {
-        doc = xmlCtxtReadMemory(xtCxt->ctxt, (char *) xstr, length, NULL, NULL, 0);
+        doc = xmlCtxtReadMemory(xtCxt->ctxt, (char *) xstr, length, NULL, NULL, XML_PARSE_HUGE);
         if (doc == NULL || xtCxt->xmlerrcxt->err_occurred)
             xml_ereport(xtCxt->xmlerrcxt, ERROR, ERRCODE_INVALID_XML_DOCUMENT,
                         "could not parse XML document");

src/interfaces/libpq/fe-exec.c

@@ -3247,23 +3247,6 @@ PQsendPipelineSync(PGconn *conn)
 /*
  * Workhorse function for PQpipelineSync and PQsendPipelineSync.
  *
- * It's legal to start submitting more commands in the pipeline immediately,
- * without waiting for the results of the current pipeline. There's no need to
- * end pipeline mode and start it again.
- *
- * If a command in a pipeline fails, every subsequent command up to and
- * including the result to the Sync message sent by pqPipelineSyncInternal
- * gets set to PGRES_PIPELINE_ABORTED state. If the whole pipeline is
- * processed without error, a PGresult with PGRES_PIPELINE_SYNC is produced.
- *
- * Queries can already have been sent before pqPipelineSyncInternal is called,
- * but pqPipelineSyncInternal needs to be called before retrieving command
- * results.
- *
- * The connection will remain in pipeline mode and unavailable for new
- * synchronous command execution functions until all results from the pipeline
- * are processed by the client.
- *
  * immediate_flush controls if the flush happens immediately after sending the
  * Sync message or not.
  */

src/test/regress/expected/event_trigger_login.out

@@ -37,9 +37,3 @@ DROP TABLE user_logins;
 DROP EVENT TRIGGER on_login_trigger;
 DROP FUNCTION on_login_proc();
 \c
-SELECT dathasloginevt FROM pg_database WHERE datname= :'DBNAME';
- dathasloginevt 
-----------------
- f
-(1 row)
-

src/test/regress/sql/event_trigger_login.sql

@@ -22,4 +22,3 @@ DROP TABLE user_logins;
 DROP EVENT TRIGGER on_login_trigger;
 DROP FUNCTION on_login_proc();
 \c
-SELECT dathasloginevt FROM pg_database WHERE datname= :'DBNAME';