548 Commits

Author SHA1 Message Date
Jan Wieck
3e99158548 update_pg_pwd() is an AR trigger. Corrected return type.
Jan
1999-12-21 22:39:02 +00:00
Jan Wieck
7c385f73e5 Required catalog changes for extended LONG attribute storage.
Jan
1999-12-20 10:40:43 +00:00
Tom Lane
939229904a Clean up some minor gcc warnings. 1999-12-20 01:23:04 +00:00
Jan Wieck
397e9b32a3 Some changes to prepare for LONG attributes.
Jan
1999-12-16 22:20:03 +00:00
Bruce Momjian
99b8f84511 Here's the Create/Alter/Drop Group stuff that's been really overdue. I
didn't have time for documentation yet, but I'll write some. There are
still some things to work out about what happens when you alter or drop
users, but the group stuff in and of itself is done.

--
Peter Eisentraut                  Sernanders väg 10:115
1999-12-16 17:24:19 +00:00
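(A minimal sketch of the new commands, assuming the syntax lands as described; group and user names here are made up:)

    CREATE GROUP staff WITH USER alice, bob;   -- create a group with two members
    ALTER GROUP staff ADD USER carol;          -- add a member
    ALTER GROUP staff DROP USER bob;           -- remove a member
    DROP GROUP staff;                          -- drop the group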
Tom Lane
7431796b46 fix_parsetree_attnums was not nearly smart enough about walking parse
trees.  Also rewrite find_all_inheritors() in a more intelligible style.
1999-12-14 03:35:28 +00:00
Bruce Momjian
549a8ba59a > From what I gather, this should be a little cleaner because the
> triggered function now returns the right datatype.

Oops, I got crossed up with Jan's improvements. Ignore this.

--
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
1999-12-14 00:17:33 +00:00
Bruce Momjian
f5a613c0ed From what I gather, this should be a little cleaner because the
triggered function now returns the right datatype.

--
Peter Eisentraut                  Sernanders väg 10:115
1999-12-14 00:12:06 +00:00
Bruce Momjian
bcaabc5698 Depending on my interpreting (and programming) skills, this might solve
anywhere from zero to two TODO items.

* Allow flag to control COPY input/output of NULLs

I got this:
COPY table .... [ WITH NULL AS 'string' ]
which does what you'd expect. The default is \N, otherwise you can use
empty strings, etc. On Copy In this acts like a filter: every data item
that looks like 'string' becomes a NULL. Pretty straightforward.

This also seems to be related to

* Make postgres user have a password by default

If I recall this discussion correctly, the problem was actually that the
default password for the postgres (or any) user is in fact "\N", because
of the way copy is used. With this change, the file pg_pwd is copied out
with nulls as empty strings, so if someone doesn't have a password, the
password is just '', which one would expect from a new account. I don't
think anyone really wants a hard-coded default password.

Peter Eisentraut                  Sernanders väg 10:115
1999-12-14 00:08:21 +00:00
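(An illustrative use of the new COPY option; the table and file names are made up:)

    -- write NULLs out as empty strings instead of the default \N
    COPY mytable TO '/tmp/mytable.out' WITH NULL AS '';

    -- on input, treat the literal string 'NULL' as a NULL value
    COPY mytable FROM '/tmp/mytable.in' WITH NULL AS 'NULL';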
Bruce Momjian
a82f9ffde6 New LDOUT makefile variable for QNX os. 1999-12-13 22:35:27 +00:00
Bruce Momjian
cb00b7faa5 I'm in TODO mood today ...
* Document/trigger/rule so changes to pg_shadow recreate pg_pwd

I did it with a trigger and it seems to work like a charm. The function
that already updates the file for create and alter user has been made a
built-in "SQL" function and a trigger is created at initdb time.

Comments around the pg_pwd updating function seem to be worried about this
routine being called concurrently, but I really don't see a reason to
worry about this. Verify for yourself. I guess we never had a system
trigger before, so treat this with care, and feel free to adjust the
nomenclature as well.

--
Peter Eisentraut                  Sernanders väg 10:115
1999-12-12 05:57:36 +00:00
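(Roughly, the effect is as if initdb had issued something like the following; the trigger name here is invented, and update_pg_pwd() is the built-in function mentioned elsewhere in this log:)

    CREATE TRIGGER pg_sync_pg_pwd
        AFTER INSERT OR UPDATE OR DELETE ON pg_shadow
        FOR EACH ROW EXECUTE PROCEDURE update_pg_pwd();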
Bruce Momjian
11023eb1f5 Meanwhile, database names containing single quotes don't work very well
at all, and because of shell quoting rules this can't be fixed, so I put
in error messages to that end.

Also, calling create or drop database in a transaction block is not so
good either, because the file system mysteriously refuses to roll back rm
calls on transaction aborts. :) So I put in checks to see if a transaction
is in progress and signal an error.

Also I put the whole call in a transaction of its own to be able to roll
back changes to pg_database in case the file system operations fail.

The alternative location issues I posted recently were untouched, awaiting
the outcome of that discussion. Other than that, this should be much more
fool-proof now.

The docs I cleaned up as well.

Peter Eisentraut                  Sernanders väg 10:115
1999-12-12 05:15:10 +00:00
Jan Wieck
62c42a05a2 Added global variable to have RI triggers override
time qualification of HeapTupleSatisfiesSnapshot()

Jan
1999-12-10 12:34:15 +00:00
Bruce Momjian
97dec77fab Rename several destroy* functions/tags to drop*. 1999-12-10 03:56:14 +00:00
Bruce Momjian
3ffd3d82db Make LD -r as macros that can be changed for QNX. 1999-12-09 19:15:45 +00:00
Jan Wieck
b8ef7e7f82 Completed FOREIGN KEY syntax.
Added functionality for automatic trigger creation during CREATE TABLE.

Added ON DELETE RESTRICT and some others.

Jan
1999-12-06 18:02:47 +00:00
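(A sketch of the syntax this completes; table and column names are made up:)

    CREATE TABLE orders (
        order_id    integer,
        customer_id integer,
        FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
            ON DELETE RESTRICT
    );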
Bruce Momjian
4901ff77bd Mention index name when reporting corruption. 1999-12-01 00:29:54 +00:00
Bruce Momjian
1f64926953 Fix compile error on older patch. 1999-11-30 04:29:57 +00:00
Bruce Momjian
eebfb9baa5 create/alter user extension
This one should work much better than the one I sent in previously. The
functionality is the same, but the patch was missing one file resulting in
the compilation failing. The docs also received a minor fix.

Peter Eisentraut                  Sernanders väg 10:115
1999-11-30 03:57:29 +00:00
Tom Lane
d367f626f4 Add permissions check: now one must be the Postgres superuser or the
table owner in order to vacuum a table.  This is mainly to prevent
denial-of-service attacks via repeated vacuums.  Allow VACUUM to gather
statistics about system relations, except for pg_statistic itself ---
not clear that it's worth the trouble to make that case work cleanly.
Cope with possible tuple size overflow in pg_statistic tuples; I'm
surprised we never realized that could happen.  Hold a couple of locks
a little longer to try to prevent deadlocks between concurrent VACUUMs.
There still seem to be some problems in that last area though :-(
1999-11-29 04:43:15 +00:00
Tom Lane
aa903cf07c Remove pg_vlock locking from VACUUM, allowing multiple VACUUMs to run in
parallel --- and, not incidentally, removing a common reason for needing
manual cleanup by the DB admin after a crash.  Remove initial global
delete of pg_statistics rows in VACUUM ANALYZE; this was not only bad
for performance of other backends that had to run without stats for a
while, but it was fundamentally broken because it was done outside any
transaction.  Surprising we didn't see more consequences of that.
Detect attempt to run VACUUM inside a transaction block.  Check for
query cancel request before starting vacuum of each table.  Clean up
vacuum's private portal storage if vacuum is aborted.
1999-11-28 02:10:01 +00:00
Tom Lane
4dded12faa COPY to a relation should keep write lock till transaction commit.
Thanks to Hiroshi for spotting the problem.
1999-11-27 21:52:53 +00:00
Bruce Momjian
922e53e6ea Enable pg_statistic cache use. 1999-11-25 00:15:57 +00:00
Bruce Momjian
74f418eb9a Add pg_statistic index, add missing Hiroshi file. 1999-11-24 16:52:50 +00:00
Bruce Momjian
bb10bf319e Rename heap_replace to heap_update. 1999-11-24 00:44:37 +00:00
Bruce Momjian
6f9ff92cc0 Tid access method feature from Hiroshi Inoue, Inoue@tpf.co.jp 1999-11-23 20:07:06 +00:00
Bruce Momjian
fc955b14ea Add system indexes to match all caches.
Make all system indexes unique.
Make all cache loads use system indexes.
Rename *rel to *relid in inheritance tables.
Rename cache names to be clearer.
1999-11-22 17:56:41 +00:00
Tom Lane
d8ba3dfb0b Change backend-side COPY to write files with permissions 644 not 666
(whoever thought world-writable files were a good default????).  Modify
the pg_pwd code so that pg_pwd is created with 600 permissions.  Modify
initdb so that permissions on a pre-existing PGDATA directory are not
blindly accepted: if the dir is already there, it does chmod go-rwx
to be sure that the permissions are OK and the dir actually is owned
by postgres.
1999-11-21 04:16:17 +00:00
Jan Wieck
73bfcf6b22 Changed pg_rewrite attributes ev_qual and ev_action to the new
compressed lztext data type.

Jan
1999-11-18 13:56:30 +00:00
Bruce Momjian
7a203a3f02 Add recreate index notice to vacuum error. 1999-11-14 17:27:01 +00:00
Bruce Momjian
86ef36c907 New NameStr macro to convert Name to Str. No need for var.data anymore.
Fewer calls to nameout.

Better use of RelationGetRelationName.
1999-11-07 23:08:36 +00:00
Bruce Momjian
d426869b89 Fix compile after COMMENT problem. 1999-10-26 16:32:46 +00:00
Bruce Momjian
577e21b34f Hello.
The following patch extends the COMMENT ON functionality to the
rest of the database objects beyond just tables, columns, and views. The
grammar of the COMMENT ON statement now looks like:

COMMENT ON [
  [ DATABASE | INDEX | RULE | SEQUENCE | TABLE | TYPE | VIEW ] <objname> |
  COLUMN <relation>.<attribute> |
  AGGREGATE <aggname> <aggtype> |
  FUNCTION <funcname> (arg1, arg2, ...) |
  OPERATOR <op> (leftoperand_typ rightoperand_typ) |
  TRIGGER <triggername> ON <relname> ] IS 'text'

Mike Mascari
(mascarim@yahoo.com)
1999-10-26 03:12:39 +00:00
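(A couple of sketched uses of the extended statement; the object names are made up:)

    COMMENT ON DATABASE sales IS 'Order-entry database';
    COMMENT ON INDEX orders_pkey IS 'Primary key of orders';
    COMMENT ON FUNCTION sales_tax (float8) IS 'Computes sales tax on an amount';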
Tom Lane
51f62d505e Standardize on MAXPGPATH as the size of a file pathname buffer,
eliminating some wildly inconsistent coding in various parts of the
system.  I set MAXPGPATH = 1024 in config.h.in.  If anyone is really
convinced that there ought to be a configure-time test to set the
value, go right ahead ... but I think it's a waste of time.
1999-10-25 03:08:03 +00:00
Bruce Momjian
7acc237744 This patch implements ORACLE's COMMENT SQL command.
From the ORACLE 7 SQL Language Reference Manual:
-----------------------------------------------------
COMMENT

Purpose:

To add a comment about a table, view, snapshot, or
column into the data dictionary.

Prerequisites:

The table, view, or snapshot must be in your own schema
or you must have COMMENT ANY TABLE system privilege.

Syntax:

COMMENT ON [ TABLE table ] |
           [ COLUMN table.column] IS 'text'

You can effectively drop a comment from the database
by setting it to the empty string ''.
-----------------------------------------------------

Example:

COMMENT ON TABLE workorders IS
   'Maintains base records for workorder information';

COMMENT ON COLUMN workorders.hours IS
   'Number of hours the engineer worked on the task';

to drop a comment:

COMMENT ON COLUMN workorders.hours IS '';

The current patch will simply perform the insert into
pg_description, as per the TODO. And, of course, when
the table is dropped, any comments relating to it
or any of its attributes are also dropped. I haven't
looked at the ODBC source yet, but I do know from
an ODBC client standpoint that the standard does
support the notion of table and column comments.
Hopefully the ODBC driver is already fetching these
values from pg_description, but if not, it should be
trivial.

Hope this makes the grade,

Mike Mascari
(mascarim@yahoo.com)
1999-10-15 01:49:49 +00:00
Tom Lane
3eb1c82277 Fix planner and rewriter to follow SQL semantics for tables that are
mentioned in FROM but not elsewhere in the query: such tables should be
joined over anyway.  Aside from being more standards-compliant, this allows
removal of some very ugly hacks for COUNT(*) processing.  Also, allow
HAVING clause without aggregate functions, since SQL does.  Clean up
CREATE RULE statement-list syntax the same way Bruce just fixed the
main stmtmulti production.
CAUTION: addition of a field to RangeTblEntry nodes breaks stored rules;
you will have to initdb if you have any rules.
1999-10-07 04:23:24 +00:00
Tom Lane
eabc714a91 Reimplement parsing and storage of default expressions and constraint
expressions in CREATE TABLE.  There is no longer an emasculated expression
syntax for these things; it's full a_expr for constraints, and b_expr
for defaults (unfortunately the fact that NOT NULL is a part of the
column constraint syntax causes a shift/reduce conflict if you try a_expr.
Oh well --- at least parenthesized boolean expressions work now).  Also,
stored expression for a column default is not pre-coerced to the column
type; we rely on transformInsertStatement to do that when the default is
actually used.  This means "f1 datetime default 'now'" behaves the way
people usually expect it to.
BTW, all the support code is now there to implement ALTER TABLE ADD
CONSTRAINT and ALTER TABLE ADD COLUMN with a default value.  I didn't
actually teach ALTER TABLE to call it, but it wouldn't be much work.
1999-10-03 23:55:40 +00:00
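(For instance, with this change a default like the one below is stored as the uncoerced string 'now' and resolved when each row is inserted, so new rows get the insertion time rather than the table-creation time; the names are made up:)

    CREATE TABLE log_entries (
        msg     text,
        logged  datetime DEFAULT 'now'
    );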
Tom Lane
6eb8d255d2 Allow CREATE FUNCTION's WITH clause to be used for all language types,
not just C, so that ISCACHABLE attribute can be specified for user-defined
functions.  Get rid of ParamString node type, which wasn't actually being
generated by gram.y anymore, even though define.c thought that was what
it was getting.  Clean up minor bug in dfmgr.c (premature heap_close).
1999-10-02 21:33:33 +00:00
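(A sketch of what this allows for a non-C function; the function itself is made up:)

    CREATE FUNCTION one_plus_one () RETURNS int4
        AS 'SELECT 1 + 1'
        LANGUAGE 'sql'
        WITH (iscachable);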
Jan Wieck
ccecf1fa46 Added utils/adt/ri_triggers with empty shells for the
FOREIGN KEY triggers.

Added pg_proc entries for all the new functions.

Jan
1999-09-30 14:54:24 +00:00
Jan Wieck
1547ee017c This is part #1 of the DEFERRED CONSTRAINT TRIGGER support.
Implements the CREATE CONSTRAINT TRIGGER and SET CONSTRAINTS commands.

TODO:
    Generic builtin trigger procedures
    Automatic execution of appropriate CREATE CONSTRAINT... at CREATE TABLE
    Support of new trigger type in pg_dump
    Swapping of huge # of events to disk

Jan
1999-09-29 16:06:40 +00:00
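(Roughly, the new commands look like this; trigger, table, and function names are made up and the exact option ordering may differ:)

    CREATE CONSTRAINT TRIGGER orders_customer_check
        AFTER INSERT OR UPDATE ON orders
        DEFERRABLE INITIALLY DEFERRED
        FOR EACH ROW EXECUTE PROCEDURE check_customer_exists ();

    SET CONSTRAINTS ALL DEFERRED;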
Vadim B. Mikheev
3fea625e9d Make tree compilable (+WAL). 1999-09-28 11:41:09 +00:00
Bruce Momjian
9394d62c73 I have been working with user-defined types and user-defined C
functions.  One problem that I have encountered with the function
manager is that it does not allow the user to define type conversion
functions that convert between user types. For instance if mytype1,
mytype2, and mytype3 are three Postgresql user types, and if I wish to
define Postgresql conversion functions like

I run into problems, because the Postgresql dynamic loader would look
for a single link symbol, mytype3, for both pieces of object code.  If
I just change the name of one of the Postgresql functions (to make the
symbols distinct), the automatic type conversion that Postgresql uses,
for example, when matching operators to arguments no longer finds the
type conversion function.

The solution that I propose, and have implemented in the attached
patch, extends the CREATE FUNCTION syntax as follows. In the first case
above I use the link symbol mytype2_to_mytype3 for the link object
that implements the first conversion function, and define the
Postgresql operator with the following syntax

The patch includes changes to the parser to include the altered
syntax, changes to the ProcedureStmt node in nodes/parsenodes.h,
changes to commands/define.c to handle the extra information in the AS
clause, and changes to utils/fmgr/dfmgr.c that alter the way that the
dynamic loader figures out what link symbol to use.  I store the
string for the link symbol in the prosrc text attribute of the pg_proc
table which is currently unused in rows that reference dynamically loaded
functions.


Bernie Frankpitt
1999-09-28 04:34:56 +00:00
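(With the extended AS clause the link symbol is given separately from the SQL-level function name; the shared-object path here is made up, following the mytype2_to_mytype3 example above:)

    CREATE FUNCTION mytype3 (mytype2) RETURNS mytype3
        AS '/usr/local/pgsql/lib/mytypes.so', 'mytype2_to_mytype3'
        LANGUAGE 'c';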
Bruce Momjian
d62a7ac6d3 Massimo's SET FSYNC and SHOW PG_OPTIONS changes, without SET QUERY_LIMIT. 1999-09-27 20:27:32 +00:00
Bruce Momjian
12a932251c Cancel query support from Massimo 1999-09-27 20:00:44 +00:00
Bruce Momjian
74a263ed34 Fix to give super user and createdb user proper update catalog rights. 1999-09-27 16:44:56 +00:00
Tom Lane
e812458b27 Several changes here, not very related but touching some of the same files.
* Buffer refcount cleanup (per my "progress report" to pghackers, 9/22).
* Add links to backend PROC structs to sinval's array of per-backend info,
and use these links for routines that need to check the state of all
backends (rather than the slow, complicated search of the ShmemIndex
hashtable that was used before).  Add databaseOID to PROC structs.
* Use this to implement an interlock that prevents DESTROY DATABASE of
a database containing running backends.  (It's a little tricky to prevent
a concurrently-starting backend from getting in there, since the new
backend is not able to lock anything at the time it tries to look up
its database in pg_database.  My solution is to recheck that the DB is
OK at the end of InitPostgres.  It may not be a 100% solution, but it's
a lot better than no interlock at all...)
* In ALTER TABLE RENAME, flush buffers for the relation before doing the
rename of the physical files, to ensure we don't get failures later from
mdblindwrt().
* Update TRUNCATE patch so that it actually compiles against current
sources :-(.
You should do "make clean all" after pulling these changes.
1999-09-24 00:25:33 +00:00
Bruce Momjian
e7cad7b0cb Add TRUNCATE command, with psql help and sgml additions. 1999-09-23 17:03:39 +00:00
Tom Lane
bd272cace6 Mega-commit to make heap_open/heap_openr/heap_close take an
additional argument specifying the kind of lock to acquire/release (or
'NoLock' to do no lock processing).  Ensure that all relations are locked
with some appropriate lock level before being examined --- this ensures
that relevant shared-inval messages have been processed and should prevent
problems caused by concurrent VACUUM.  Fix several bugs having to do with
mismatched increment/decrement of relation ref count and mismatched
heap_open/close (which amounts to the same thing).  A bogus ref count on
a relation doesn't matter much *unless* a SI Inval message happens to
arrive at the wrong time, which is probably why we got away with this
sloppiness for so long.  Repair missing grab of AccessExclusiveLock in
DROP TABLE, ALTER/RENAME TABLE, etc, as noted by Hiroshi.
Recommend 'make clean all' after pulling this update; I modified the
Relation struct layout slightly.
Will post further discussion to pghackers list shortly.
1999-09-18 19:08:25 +00:00
Tom Lane
4644fc8071 Eliminate query length limitation imposed by pg_client_to_server
and pg_server_to_client.  Eliminate copy.c's restriction on the length
of a single attribute.
1999-09-11 22:28:11 +00:00
Tom Lane
b399805e22 Eliminate elog()'s hardwired limit on length of an error message.
This change seems necessary in conjunction with long queries, and it
cleans up some bogosity in connection with long EXPLAIN texts anyway.
Note that current libpq will accept any length error message (at least
until it runs out of memory); prior versions have a limit of 8K, but
will cleanly discard excess error text, so there shouldn't be any
big compatibility problems with old clients.
1999-09-11 19:06:42 +00:00