In the past, we used a 'Lispy' linked list implementation: a "list" was
merely a pointer to the head node of the list. The problem with that
design was that it made lappend() and length() take linear time. This patch
fixes that problem (and others) by maintaining a count of the list
length and a pointer to the tail node along with each head node pointer.
A "list" is now a pointer to a structure containing some meta-data
about the list; the head and tail pointers in that structure refer
to ListCell structures that maintain the actual linked list of nodes.
The function names of the list API have also been changed to, I hope,
be more logically consistent. By default, the old function names are
still available; they will be disabled by default once the rest of
the tree has been updated to use the new API names.
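For illustration, here is a minimal sketch of the header-plus-cells layout
described above; the field names approximate pg_list.h but are not a verbatim
copy, and the allocator and empty-list creation are simplified.

    #include <stdlib.h>

    typedef struct ListCell ListCell;

    typedef struct List
    {
        int       length;       /* kept up to date, so length() is O(1) */
        ListCell *head;         /* first cell of the list */
        ListCell *tail;         /* last cell, so lappend() is O(1) too */
    } List;

    struct ListCell
    {
        void     *data;         /* this element's payload */
        ListCell *next;         /* next cell, or NULL at the end */
    };

    /* append without walking the whole chain (sketch; assumes list != NULL) */
    static List *
    lappend_sketch(List *list, void *datum)
    {
        ListCell *cell = malloc(sizeof(ListCell));

        cell->data = datum;
        cell->next = NULL;
        if (list->tail != NULL)
            list->tail->next = cell;
        else
            list->head = cell;          /* list was empty */
        list->tail = cell;
        list->length++;
        return list;
    }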
o -Allow dump/load of CSV format
This adds new keywords to COPY and \copy:
CSV - enable CSV mode (comma-separated values)
QUOTE - specify quote character
ESCAPE - specify escape character
FORCE - force quoting of specified column
LITERAL - suppress null comparison for columns
Doc changes included. Regression updates coming from Andrew.
COPY now throws an error if the delimiter appears in the COPY NULL string:
test=> copy pg_language to '/tmp/x' with delimiter '|';
COPY
test=> copy pg_language to '/tmp/x' with delimiter '|' null '|x';
ERROR: COPY delimiter must not appear in the NULL specification
test=> copy pg_language from '/tmp/x' with delimiter '|' null '|x';
ERROR: COPY delimiter must not appear in the NULL specification
It also throws an error if it conflicts with the default NULL string:
test=> copy pg_language to '/tmp/x' with delimiter '\\';
ERROR: COPY delimiter must not appear in the NULL specification
test=> copy pg_language to '/tmp/x' with delimiter '\\' NULL 'x';
COPY
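The check behind these errors can be sketched as a one-character scan; the
function and variable names below are illustrative, not the actual copy.c code.

    #include <stdbool.h>
    #include <string.h>

    /*
     * True if the (single-character) delimiter occurs anywhere in the NULL
     * marker -- the condition that both errors above reject.  With the
     * default NULL marker "\\N", a delimiter of '\\' trips the check, which
     * is exactly the third example above.
     */
    static bool
    delimiter_conflicts_with_null(const char *delim, const char *null_print)
    {
        return strchr(null_print, delim[0]) != NULL;
    }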
the relcache, and so the notion of 'blind write' is gone. This should
improve efficiency in bgwriter and background checkpoint processes.
Internal restructuring in md.c to remove the not-very-useful array of
MdfdVec objects --- might as well just use pointers.
Also remove the long-dead 'persistent main memory' storage manager (mm.c),
since it seems quite unlikely to ever get resurrected.
whereToSendOutput instead because they are really inquiring about
the correct client communication protocol. Update some comments.
This is pointing towards supporting regular FE/BE client protocol
in a standalone backend, per discussion a month or so back.
in a COPY error message. It seems that glibc gets indigestion if it is
asked to truncate strings that contain invalid UTF-8 encoding sequences.
vsnprintf will return -1 in such cases, leading to looping and eventual
memory overflow in elog.c. Instead use our own, more robust pg_mbcliplen
routine. I believe this problem accounts for several recent reports of
unexpected 'out of memory' errors during COPY IN.
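A sketch of the substitution being described; it assumes the backend
environment, and everything except pg_mbcliplen() itself (the buffer size,
variable names, message text) is illustrative.

    /*
     * Clip the offending data line ourselves, at a multibyte-character
     * boundary, and hand elog a plain "%s" -- no "%.*s" precision spec
     * for vsnprintf to choke on.
     */
    char    buf[101];
    int     clipped = pg_mbcliplen(line, strlen(line), sizeof(buf) - 1);

    memcpy(buf, line, clipped);
    buf[clipped] = '\0';
    elog(ERROR, "copy: line %d, invalid input: \"%s\"", lineno, buf);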
COPY FROM now matches the null string against each input field before it is
de-backslashed, not after. This allows the null string \N
to be reliably distinguished from the data value \N (which must be
represented as \\N). Per bug report from Manfred Koizar ... but it's
amazing this hasn't been reported before ...
Also, be consistent about encoding conversion for the null string: the form
specified in the command is in the server encoding, but what is sent
to/from the client must be in the client encoding. This never worked quite
right before either.
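A sketch of the new ordering: the null-string comparison happens on the raw,
still-backslashed text, and only non-null fields get de-escaped. The helper
names here are hypothetical, not the real copy.c routines.

    #include <stdbool.h>
    #include <string.h>

    extern char *deescape_field(char *raw);    /* hypothetical de-backslasher */

    static char *
    read_field(char *raw_field, const char *null_print, bool *is_null)
    {
        if (strcmp(raw_field, null_print) == 0)
        {
            *is_null = true;                   /* the file's \N stays NULL */
            return NULL;
        }
        *is_null = false;
        return deescape_field(raw_field);      /* the file's \\N yields the value "\N" */
    }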
COPY did not work when invoked via extended query protocol, because libpq
sends Sync right after Execute without realizing that the command to be
executed is COPY. There seems
to be no reasonable way for it to realize that, either, so the best fix
seems to be to make the backend ignore Sync during copy-in mode. Bit of
a wart on the protocol, but little alternative. Also, libpq must send
another Sync after terminating the COPY, if the command was issued via
Execute.
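On the backend side, the "ignore Sync during copy-in" rule might look like the
fragment below; the message-loop shape, the DoingCopyIn flag, and the helper
names are all illustrative, not the actual postgres.c code.

    switch (msgtype)            /* inside the per-message dispatch loop */
    {
        case 'S':               /* Sync */
            if (DoingCopyIn)
                break;          /* swallow the Sync libpq sent blindly after Execute */
            finish_xact_command();      /* illustrative stand-in */
            send_ready_for_query();     /* illustrative stand-in */
            break;

        /* ... other message types ... */
    }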
handle multiple 'formats' for data I/O. Restructure CommandDest and
DestReceiver stuff one more time (it's finally starting to look a bit
clean though). Code now matches latest 3.0 protocol document as far
as message formats go --- but there is no support for binary I/O yet.
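For orientation, a DestReceiver is essentially a small table of callbacks
selected by CommandDest; the sketch below shows the idea, but the exact member
names and signatures after this restructuring are my assumption, not a quote
of dest.h.

    typedef struct _DestReceiver DestReceiver;

    struct _DestReceiver
    {
        /* called once per result tuple */
        void (*receiveTuple) (HeapTuple tuple, TupleDesc typeinfo,
                              DestReceiver *self);
        /* bracket the result set */
        void (*startup) (DestReceiver *self, int operation, TupleDesc typeinfo);
        void (*shutdown) (DestReceiver *self);
        /* release the receiver object itself */
        void (*destroy) (DestReceiver *self);

        CommandDest mydest;     /* which destination this receiver serves */
    };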
rewritten and the protocol is changed, but most elog calls are still
elog calls. Also, we need to contemplate mechanisms for controlling
all this functionality --- e.g., how much stuff should appear in the
postmaster log? And what API should libpq expose for it?
have length words. COPY OUT reimplemented per new protocol: it doesn't
need \. anymore, thank goodness. COPY BINARY to/from frontend works,
at least as far as the backend is concerned --- libpq's PQgetline API
is not up to snuff, and will have to be replaced with something that is
null-safe. libpq uses message length words for performance improvement
(no cycles wasted rescanning long messages), but not yet for error
recovery.
Output \r\n termination on Win32.
Disallow a literal carriage return as a data value; backslash-carriage-return
and \r are still allowed.
Doc changes already committed.
(materialization into a tuple store) discussed on pgsql-hackers earlier.
I've updated the documentation and the regression tests.
Notes on the implementation:
- I needed to change the tuple store API slightly -- it assumes that it
won't be used to hold data across transaction boundaries, so the temp
files that it uses for on-disk storage are automatically reclaimed at
end-of-transaction. I added a flag to tuplestore_begin_heap() to control
this behavior (see the sketch after this entry). Is changing the tuple store
API in this fashion OK?
- in order to store executor results in a tuple store, I added a new
CommandDest. This works well for the most part, with one exception: the
current DestFunction API doesn't provide enough information to allow the
Executor to store results into an arbitrary tuple store (where the
particular tuple store to use is chosen by the call site of
ExecutorRun). To work around this, I've temporarily hacked up a solution
that works, but is not ideal: since the receiveTuple DestFunction is
passed the portal name, we can use that to lookup the Portal data
structure for the cursor and then use that to get at the tuple store the
Portal is using. This unnecessarily ties the Portal code to the
tupleReceiver code, but it works...
The proper fix for this is probably to change the DestFunction API --
Tom suggested passing the full QueryDesc to the receiveTuple function.
In that case, callers of ExecutorRun could "subclass" QueryDesc to add
any additional fields that their particular CommandDest needed to get
access to. This approach would work, but I'd like to think about it for
a little bit longer before deciding which route to go. In the meantime,
the code works fine, so I don't think a fix is urgent.
- (semi-related) I added a NO SCROLL keyword to DECLARE CURSOR, and
adjusted the behavior of SCROLL in accordance with the discussion on
-hackers.
- (unrelated) Cleaned up some SGML markup in sql.sgml, copy.sgml
Neil Conway
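For reference, the tuplestore change mentioned in the first note above amounts
to one extra boolean; this sketch assumes the three-argument
tuplestore_begin_heap() form (random access, the new cross-transaction flag,
and a memory budget in kilobytes), with an illustrative budget.

    #include "utils/tuplestore.h"

    Tuplestorestate *store =
        tuplestore_begin_heap(true,         /* random access, so SCROLL can fetch backwards */
                              true,         /* new flag: keep temp files past end of xact */
                              10 * 1024);   /* spill to disk beyond ~10 MB (illustrative) */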
startup, not in the parser; this allows ALTER DOMAIN to work correctly
with domain constraint operations stored in rules. Rod Taylor;
code review by Tom Lane.
a per-query memory context created by CreateExecutorState --- and destroyed
by FreeExecutorState. This provides a final solution to the longstanding
problem of memory leaked by various ExecEndNode calls.
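A sketch of the lifecycle this sets up, assuming the backend environment;
es_query_cxt is the per-query context field as I understand it.

    EState     *estate = CreateExecutorState();     /* creates the per-query context */
    MemoryContext oldcxt = MemoryContextSwitchTo(estate->es_query_cxt);

    /* everything palloc'd here belongs to the query, not to individual nodes */
    /* ... build executor state trees, run the plan ... */

    MemoryContextSwitchTo(oldcxt);
    FreeExecutorState(estate);      /* destroys the context and anything leaked into it */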
execution state trees, and ExecEvalExpr takes an expression state tree
not an expression plan tree. The plan tree is now read-only as far as
the executor is concerned. Next step is to begin actually exploiting
this property.
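The two-step pattern this implies is sketched below; the argument lists follow
my reading of the executor API of that era and may not match the headers
exactly.

    /* once per query: derive a state tree from the (now read-only) expression tree */
    ExprState  *exprstate = ExecInitExpr(expr, planstate);

    /* once per tuple: evaluate against the state tree, never the plan tree */
    bool        isnull;
    Datum       value = ExecEvalExpr(exprstate, econtext, &isnull, NULL);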
so that all executable expression nodes inherit from a common supertype
Expr. This is somewhat of an exercise in code purity rather than any
real functional advance, but getting rid of the extra Oper or Func node
formerly used in each operator or function call should provide at least
a little space and speed improvement.
initdb forced by changes in stored-rules representation.
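To illustrate the shape of the hierarchy, here is an abbreviated paraphrase of
the node layout (several fields are omitted; see primnodes.h for the real
definitions):

    typedef struct Expr
    {
        NodeTag     type;       /* every executable expression node begins like this */
    } Expr;

    typedef struct OpExpr
    {
        Expr        xpr;        /* "inherits" from Expr */
        Oid         opno;       /* the operator ... */
        Oid         opfuncid;   /* ... and its underlying function, folded in where
                                 * a separate Oper node used to hang */
        List       *args;       /* the operator's arguments */
    } OpExpr;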
and eliminate its manual pfree() calls. This solves the encoding-conversion
bug recently reported, and should be faster and more robust than the
original coding anyway. For example, we are no longer at risk if
datatype output routines leak memory or choose to return a constant string.
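The entry does not spell out the mechanism, but replacing manual pfree() with
whole-context cleanup usually looks like the sketch below; the per-row
placement, the context name, and the two hypothetical helpers are my
assumptions.

    MemoryContext rowcxt = AllocSetContextCreate(CurrentMemoryContext,
                                                 "COPY TO per-row data",
                                                 ALLOCSET_DEFAULT_MINSIZE,
                                                 ALLOCSET_DEFAULT_INITSIZE,
                                                 ALLOCSET_DEFAULT_MAXSIZE);

    while (copy_fetch_next_row(&row))           /* hypothetical iterator */
    {
        MemoryContext oldcxt = MemoryContextSwitchTo(rowcxt);

        /* output functions and encoding conversion allocate freely in rowcxt;
         * a routine that leaks, or returns a constant string, does no harm */
        format_and_send_row(&row);              /* hypothetical */

        MemoryContextSwitchTo(oldcxt);
        MemoryContextReset(rowcxt);             /* replaces all the manual pfree() calls */
    }

    MemoryContextDelete(rowcxt);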
sublink results and COPY's domain constraint checking. A Const that
isn't really constant is just a Bad Idea(tm). Remove hacks in
parse_coerce and other places that were needed because of the former
kludgery.
-hackers a couple days ago.
Notes/caveats:
- added regression tests for the new functionality, all
regression tests pass on my machine
- added pg_dump support
- updated PL/PgSQL to support per-statement triggers; didn't
look at the other procedural languages.
- there's (even) more code duplication in trigger.c than there
was previously. Any suggestions on how to refactor the
ExecXXXTriggers() functions to reuse more code would be
welcome -- I took a brief look at it, but couldn't see an
easy way to do it (there are several subtly-different
versions of the code in question)
- updated the documentation. I also took the liberty of
removing a big chunk of duplicated syntax documentation in
the Programmer's Guide on triggers, and moving that
information to the CREATE TRIGGER reference page.
- I also included some spelling fixes and similar small
cleanups I noticed while making the changes. If you'd like
me to split those into a separate patch, let me know.
Neil Conway
Trigger information is now copied out of the relcache at the start of each
query that uses it. This ensures that triggers will be applied consistently
throughout a query even if someone commits changes to the relation's
pg_class.reltriggers field meanwhile. Per crash report from Laurette Cisneros.
While at it, simplify memory management in relcache.c, which no longer
needs the old hack to try to keep trigger info in the same place over
a relcache entry rebuild. (Should try to fix rd_att and rewrite-rule
access similarly, someday.) And make RelationBuildTriggers simpler and
more robust by making it build the trigdesc in working memory and then
CopyTriggerDesc() into cache memory.
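The tail end of RelationBuildTriggers(), as described, can be sketched like
this; the working-memory build step is elided behind a hypothetical helper.

    TriggerDesc *trigdesc = build_trigdesc_in_working_memory(relation);  /* hypothetical */
    MemoryContext oldcxt;

    /* copy the finished descriptor into long-lived cache memory in one step */
    oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
    relation->trigdesc = CopyTriggerDesc(trigdesc);
    MemoryContextSwitchTo(oldcxt);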