The previous implementation was flawed in the following ways:
1. Only returned columns that had a default value defined, rather than all
columns in a table
2. Used 2 * N + 1 queries to find out attributes, comments and typenames
for N columns.
By using some outer join syntax it is possible to retrieve all the necessary
information in just one SQL statement. This means this version is only
suitable for PostgreSQL >= 7.1; I don't know whether that's a problem.
I've tested this function with current sources and 7.1.3 and patched both
jdbc1 and jdbc2. I haven't compiled or tested the jdbc1 version, though, as
I have no JDK 1.1 available.
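This appears to concern DatabaseMetaData.getColumns(); assuming so, here is a
minimal usage sketch that exercises the behaviour described (the table name and
connection details below are made up for illustration):

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class ListColumns {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
            DatabaseMetaData dbmd = con.getMetaData();
            // With the fix, every column is returned, not just those with a default.
            ResultSet rs = dbmd.getColumns(null, null, "mytable", "%");
            while (rs.next()) {
                System.out.println(rs.getString("COLUMN_NAME") + " "
                    + rs.getString("TYPE_NAME") + " default="
                    + rs.getString("COLUMN_DEF"));
            }
            rs.close();
            con.close();
        }
    }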
Note the discussion in http://fts.postgresql.org/db/mw/msg.html?mid=1029626
regarding differences in obtaining comments on database objects in 7.1 and
7.2. I was unable to use the following syntax (or similar ones):
    select
        ...,
        description
    from
        ...
        left outer join col_description(a.attrelid, a.attnum) description
    order by
        c.relname, a.attnum;
(the error was: parse error at or near '('), so I had to paste the actual
code for the col_description function into the left outer join. Maybe
someone who is more knowledgeable about outer joins can provide me with a
better SQL statement.
Jeroen van Vianen
This patch restructures the query execution code of the JDBC driver.
I've done this by extracting it into a new method object called
QueryExecutor (should go into org/postgresql/core/) and then taking it
apart into different methods in that class.
A short summary:
* Extracted ExecSQL() from Connection into a method object called
QueryExecutor.
* Moved ReceiveFields() from Connection to QueryExecutor.
* Extracted parts of the original ExecSQL() method body into smaller
methods on QueryExecutor.
* Bug fix: The instance variable "pid" in Connection was used in two
places with different meaning. Both were probably in dead code, but it's
fixed anyway.
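As an illustration of the method-object shape described above (this is a
generic sketch, not the driver's actual QueryExecutor code):

    // Generic method-object sketch: the body of a long ExecSQL()-style method
    // becomes a small class with one field per former local variable and one
    // method per logical step.
    class QueryExecutorSketch {
        private final String sql;

        QueryExecutorSketch(String sql) {
            this.sql = sql;
        }

        Object execute() throws Exception {
            sendQuery();
            return receiveFields();
        }

        private void sendQuery() throws Exception {
            // write the query string to the backend (omitted in this sketch)
        }

        private Object receiveFields() throws Exception {
            // read field descriptions and rows from the backend (omitted in this sketch)
            return null;
        }
    }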
Anders Bengtsson
Attached is a patch for the changed files and a few new files:
- test/jdbc2/BatchExecuteTest.java
- util/MessageTranslator.java
- jdbc2/PBatchUpdateException.java
As an aside, is this the best way to submit a patch consisting
of both changed and new files? Or is there a smarter cvs command
which gets them all in one patch file?
This patch fixes batch processing in the JDBC driver to be
JDBC-2 compliant. Specifically, the changes introduced by this
patch are:
1) Statement.executeBatch() no longer commits or rolls back a
transaction, as this is not prescribed by the JDBC spec. It's up
to the application to disable autocommit and to commit or
roll back the transaction. Where JDBC talks about "executing the
statements as a unit", it means executing the statements in one
round trip to the backend for better performance; it does not
mean executing the statements in a transaction. (See the usage
sketch after this list.)
2) Statement.executeBatch() now throws a BatchUpdateException()
as required by the JDBC spec. The significance of this is that
the receiver of the exception gets the updateCounts of the
commands that succeeded before the error occurred. In order for
the messages to be translatable, java.sql.BatchUpdateException
is extended by org.postgresql.jdbc2.PBatchUpdateException() and
the localization code is factored out from
org.postgresql.util.PSQLException to a separate singleton class
org.postgresql.util.MessageTranslator.
3) When there is no batch or there are 0 statements in the batch
when Statement.executeBatch() is called, do not throw an
SQLException, but silently do nothing and return an update count
array of length 0. The JDBC spec says "Throws an SQLException if
the driver does not support batch statements", which is clearly
not the case. See testExecuteEmptyBatch() in
BatchExecuteTest.java for an example. The message
postgresql.stat.batch.empty is removed from the language
specific properties files.
4) When Statement.executeBatch() is performed, reset the
statement's list of batch commands to empty. The JDBC spec isn't
100% clear about this. This behaviour is only documented in the
Java tutorial
(http://java.sun.com/docs/books/tutorial/jdbc/jdbc2dot0/batchupdates.html).
Note that the Oracle JDBC driver also resets the statement's
list in executeBatch(), and this seems the most reasonable
interpretation.
5) A new test case is added to the JDBC test suite which tests
various aspects of batch processing. See the new file
BatchExecuteTest.java.
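A minimal usage sketch covering points 1) through 3) above (the table name and
connection details are made up for illustration):

    import java.sql.BatchUpdateException;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class BatchExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
            con.setAutoCommit(false);            // the application controls the transaction
            Statement stmt = con.createStatement();
            stmt.addBatch("INSERT INTO testbatch VALUES (1, 'a')");
            stmt.addBatch("INSERT INTO testbatch VALUES (2, 'b')");
            try {
                int[] counts = stmt.executeBatch();   // one round trip, no implicit commit
                con.commit();
                System.out.println("statements executed: " + counts.length);
            } catch (BatchUpdateException e) {
                // update counts of the commands that succeeded before the error
                int[] counts = e.getUpdateCounts();
                con.rollback();
                System.out.println("failed after " + counts.length + " successful commands");
            } finally {
                stmt.close();
                con.close();
            }
        }
    }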
Regards,
René Pijlman
The man pages are now installed into standard sections of the man
system. Some systems did not understand the 'l' section, and in general
it wasn't entirely appropriate.
On SCO OpenServer, the man pages won't be installed at all until someone
figures out their man system.
Note that the jdbc1 (JDK 1.1) driver no longer compiles, due to objects
being referenced in this patch that do not exist in JDK1.1.
Barry Lind
---------------------------------------------------------------------------
The JDBC driver requires
permission java.net.SocketPermission "host:port", "connect";
in the policy file of the application using the JDBC driver
in the postgresql.jar file. Since the Socket() call in the
driver is not protected by AccessController.doPrivileged() this
permission must also be granted to the entire application.
The attached diff fixes it so that the connect permission can be
restricted to just the postgresql.jar codeBase if desired.
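A sketch of the kind of change this implies: wrapping the socket creation in
doPrivileged() so that only the driver's codeBase needs the SocketPermission
grant. This is illustrative only, not the actual patch or driver code:

    import java.io.IOException;
    import java.net.Socket;
    import java.security.AccessController;
    import java.security.PrivilegedActionException;
    import java.security.PrivilegedExceptionAction;

    public class PrivilegedSocketOpen {
        public static Socket open(final String host, final int port) throws IOException {
            try {
                return (Socket) AccessController.doPrivileged(
                    new PrivilegedExceptionAction() {
                        public Object run() throws IOException {
                            // only the code base containing this class needs
                            // the SocketPermission "host:port", "connect" grant
                            return new Socket(host, port);
                        }
                    });
            } catch (PrivilegedActionException e) {
                throw (IOException) e.getException();
            }
        }
    }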
David Daney
Attached is a patch to org.postgresql.util.Serialize and
org.postgresql.jdbc2.PreparedStatement that fixes the ability to
"serialize" a simple java class into a postgres table.
The current cvs seems completely broken in this support, so the patch
puts it into working condition, granted that there are many limitations
with serializing java classes into Postgres.
The code to do serialization appears to have been in the driver since
Postgres 6.4, according to some comments in the source. My code does not
add any totally new ability to the driver; it just fixes what is there
so that it is actually usable. I do not think that it should affect any
existing functions of the driver that people regularly depend on.
The code is activated if you use jdbc2.PreparedStatement and try to
setObject() some java class type that is unrecognized, i.e. not String or
some other primitive or known type. This causes a sequence of function
calls that results in an instance of Serialize being instantiated for
the class type passed. The Serialize constructor queries pg_class
to see if it can find an existing table that matches the name of the
java class. If found, it continues and tries to use the table to
store the object; otherwise an SQLException is thrown and no harm is
done. Serialize.create() has to be used to set up the table for a java
class before anything can really happen with this code other than an
SQLException (unless by some freak chance a table exists that it thinks
it can use).
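For illustration, a hedged sketch of the code path just described. The class,
table, and SQL below are made up; a table matching the class name is assumed to
have been created with Serialize.create() beforehand, otherwise the setObject()
call throws an SQLException as noted above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // A plain java class unknown to the driver; its name is assumed to match
    // a table prepared earlier via Serialize.create().
    class Person {
        public int id;
        public String name;
    }

    public class SerializeSketch {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
            Person p = new Person();
            p.id = 1;
            p.name = "Alice";
            // 'holder' is a made-up table with a single column to receive the reference.
            PreparedStatement ps = con.prepareStatement("insert into holder values (?)");
            // Passing an unrecognized class type triggers the Serialize code path.
            ps.setObject(1, p);
            ps.executeUpdate();
            ps.close();
            con.close();
        }
    }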
I saw a difference in Serialize.java between 7.1.3 and 7.2devel that I
didn't notice before, so I had to redo my changes from the 7.2devel
version (which is why I had to resend this patch now). I was missing the
fixString stuff, which is nice and is important to ensure the inserts
will not fail due to embedded single quotes or unescaped backslashes. I
changed that fixString function in Serialize just a little, since there
is no need to muddle with escaping newlines: only escaping single quotes
and literal backslashes is needed. Postgres appears to insert newlines
within strings without trouble.
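A minimal sketch of the kind of escaping described (not the driver's actual
fixString() code): double single quotes and backslashes and leave newlines
alone.

    // Illustrative only: escape single quotes and backslashes in a string
    // literal, leaving newlines untouched.
    static String escapeLiteral(String s) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == '\'') {
                sb.append("''");
            } else if (c == '\\') {
                sb.append("\\\\");
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }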
This patch moves the logic that looks up TypeOid, PGTypeName, and
SQLTypeName from Field to Connection. It is moved to Connection since
it needs to differ between the jdbc1 and jdbc2 versions, and Connection
already has different subclasses for the two driver versions. It also
made sense to move the logic to Connection as some of the logic was
already there anyway.
Barry Lind
This patch fixes the problem described in the following email.
> > The problem: When I call getBigDecimal() on a ResultSet, it
> > sometimes throws an exception:
> >
> > Bad BigDecimal 174.50
> > at org.postgresql.jdbc2.ResultSet.getBigDecimal(ResultSet.java:373)
> > at org.postgresql.jdbc2.ResultSet.getBigDecimal(ResultSet.java:984)
> > ...blah blah blah...
> > org.postgresql.util.PSQLException: Bad BigDecimal 174.50
Barry Lind
The previous attachment seems to have arrived garbled, so it may be a
transit problem. I also removed the 'txt' suffix in case that was
confusing some transport layer trying to be too intelligent for our own
good.
This may have been because the Array.java class from the
previous patch didn't seem to have made it into the snapshot
build for some reason. This patch should at least fix that issue.
Greg Zoller
> Shouldn't
>
> throw new PSQLException("metadata unavailable");
>
> in getTypeInfo() be something like:
>
> throw new PSQLException("postgresql.meta.unavailable");
>
> to allow translation of the error message in the
> errors*.properties files?
You're right. Attached is an updated patch that also includes a message
in errors.properties. I've attempted a French message in
errors_fr.properties, but beware that I haven't written French in quite a
few years. I don't know Italian, German, or Dutch, so I can't do those.
Liam Stewart
Here is another attempt at a patch to 7.1.2 to support Array.
[I think I've solved the mangled patch problem. Hotmail seems to
try to format the text file, so gzipping it should solve this
problem.]
In this patch I've incorporated Barry's feedback. Specifically:
1) OIDs are no longer hard-coded into Array.java. In order to
support this change I added a getOID(String) method to Field.java
which receives a PostgreSQL field type and returns a value from
java.sql.Types. I couldn't get away from using OIDs altogether
because the JDBC spec for Array specifies that some methods return
a ResultSet. This requires that I construct Field objects,
which means I need OIDs. At least this approach doesn't hard-code
these values. A Hashtable cache has been added to Field
so that an SQL lookup isn't necessary (following the model already
in Field.java).
2) Rewired the base formatting code in ResultSet.java to use 'to'
methods, which are then exposed as static methods in ResultSet.
These methods are used in Array to format the data without
duplicating code.
3) Removed an artifact call to first() in ResultSet.getArray().
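A minimal usage sketch of the Array support described (the table, column, and
connection details are made up; since the element type returned by getArray()
depends on the column type, reflection is used to stay generic):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ArrayExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT scores FROM results");
            while (rs.next()) {
                java.sql.Array arr = rs.getArray(1);     // e.g. an int4[] column
                Object elements = arr.getArray();
                int n = java.lang.reflect.Array.getLength(elements);
                for (int i = 0; i < n; i++) {
                    System.out.println(java.lang.reflect.Array.get(elements, i));
                }
            }
            rs.close();
            stmt.close();
            con.close();
        }
    }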
Greg Zoller
This patch includes two changes in the JDBC driver:
1) When connected to a backend >= 7.2: use obj_description() and
col_description() instead of direct access to pg_description.
2) In DatabaseMetaData.getTables()/getColumns()/getProcedures():
when there is no comment on the object, return null in the
REMARKS column of the ResultSet, instead of the default string
"no remarks".
Change 2 first appeared as a side-effect of change 1, but it is
actually more compliant with the JDBC spec: "String object
containing an explanatory comment on the table/column/procedure,
which may be null". The default string "no remarks" was strictly
speaking incorrect, as it could not be distinguished from a real
user comment "no remarks". So I removed the default string
completely.
Change 2 might break existing code that doesn't follow the JDBC
spec and isn't prepared to handle a null in the REMARKS column
of getTables()/getColumns()/getProcedures().
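A minimal sketch of REMARKS handling that copes with the null return described
above (the connection details are made up):

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class RemarksExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
            DatabaseMetaData dbmd = con.getMetaData();
            ResultSet rs = dbmd.getTables(null, null, "%", new String[] { "TABLE" });
            while (rs.next()) {
                String table = rs.getString("TABLE_NAME");
                String remarks = rs.getString("REMARKS");  // may be null instead of "no remarks"
                System.out.println(table + ": "
                    + (remarks != null ? remarks : "<no comment>"));
            }
            rs.close();
            con.close();
        }
    }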
Patch tested with jdbc2 against both a 7.1 and a CVS tip
backend. I did not have a jdbc1 environment to build and test
with, but since the touched code is identical in jdbc1 and jdbc2
I don't foresee any problems.
Regards,
René Pijlman
* Merges identical code from org.postgresql.jdbc[1|2].Statement into
org.postgresql.Statement.
* Moves escapeSQL() method from Connection to Statement (the only place
it's used)
* Minor cleanup of the new isolation level stuff.
* Minor cleanup of version string handling.
Anders Bengtsson
Here is a context diff from the latest cvs.
And I see why you couldn't apply the last diff: the setCatalog diff has
been backed out; that was causing the compile problem in the first
place.
The following one needs to be applied to allow the current cvs to
compile.
Dave Cramer
1) improves performance of commit/rollback by reducing number of round
trips to the server
2) uses 7.1 functionality for setting the transaction isolation level
3) backs out a patch from 11 days ago because that code failed to
compile under jdk1.1
Details:
1) The old code was doing the following for each commit:
    commit
    begin
    set transaction isolation level xxx
thus a call to commit was performing three round trips to the database.
The new code does this in one round trip as:
commit; begin; set transaction isolation level xxx
A simple test program that performs 1000 transactions (where each
transaction does one simple select inside that transaction) showed the
following before and after timings:
Client and Server on same machine:
    old         new
    1.877 sec   1.405 sec   (25.1% improvement)
Client and Server on different machines:
    old         new
    4.184 sec   2.927 sec   (34.3% improvement)
(all timings are an average of four different runs)
2) The driver was using 'set transaction isolation level xxx' at the
beginning of each transaction, instead of using the new 7.1 syntax of
'set session characteristics as transaction isolation level xxx', which
only needs to be done once instead of for each transaction. This is
done conditionally (i.e. if the server is 7.0 or older, do the old
behaviour, else do the new behaviour) so as not to break backward
compatibility. This also required moving some code that checks/tests
database version numbers from the DatabaseMetaData object to the
Connection object.
3) Finally, while testing, I discovered that the code that was checked in
11 days ago actually didn't compile. The code in the patch for
Connection.setCatalog() used Properties.setProperty(), which only exists
in JDK1.2 or higher, so compiling the JDBC1 driver failed because this
method doesn't exist there. Thus I backed out that patch.
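For reference, a JDK1.1-compatible way to achieve the same thing is
Properties.put(); a minimal sketch (the property key below is illustrative,
not necessarily the driver's actual key):

    import java.util.Properties;

    public class Jdk11Props {
        public static void main(String[] args) {
            Properties info = new Properties();
            // Properties.setProperty() exists only in JDK 1.2+;
            // put() works under JDK 1.1 as well.
            info.put("PGDBNAME", "mydb");
            System.out.println(info.getProperty("PGDBNAME"));
        }
    }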
Barry Lind
This patch moves code that is identical in the two
connection implementations (org.postgresql.jdbc[1|2].Connection) into
their superclass (org.postgresql.Connection).
It also changes the close() methods of Connection and PG_Stream, so that
PG_Stream is no longer responsible for sending the termination packet 'X'
to the backend. I figured that protocol-level stuff like that belonged in
Connection more than in PG_Stream.
Anders Bengtsson
This diff updates some methods in Connection - note: I've updated
setCatalog(String catalog) from my previous diff so that it checks
whether it is already connected to the specified catalog.
Jason Davies
Here's a patch against the current CVS. The changes from the previous
patch are mostly related to the changed interface for PG_Stream.
Anders Bengtsson
This patch fixes a bug in the handling of null terminated strings. The
FE/BE protocol in some cases sends null terminated strings to the
client. The docs for the FE/BE protocol state
that there is no limit on the size of a null terminated string sent to
the client and a client should be coded using an expanding buffer to
deal with large strings. The old code did not do this and gave an error
if a null terminated string was greater than either 4 or 8K. It appears
that with the advent of TOAST very long SQL statements are becoming more
common, and apparently some error messages from the backend include the
SQL statement, thus easily exceeding the 8K limit in the old code.
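A minimal sketch of reading a null terminated string into an expanding buffer,
as the protocol docs require (illustrative only, not the driver's actual
PG_Stream code; the encoding handling shown is an assumption):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Reads bytes until the terminating 0 byte, growing the buffer as needed,
    // then applies character set conversion.
    public class NullTerminatedReader {
        public static String receiveString(InputStream in, String encoding)
                throws IOException {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            int c;
            while ((c = in.read()) > 0) {
                buf.write(c);
            }
            if (c < 0) {
                throw new IOException("EOF before string terminator");
            }
            return buf.toString(encoding);
        }
    }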
In fixing this, I also cleaned up some calls in the JDBC fastpath code
that were not doing character set conversion under multibyte, and removed
some methods that were no longer needed. I also removed a potential
threading problem with a shared variable that was being used in
Connection.java.
Thanks to Steve Wampler for discovering the problem and sending the
initial diffs that were the basis of this patch.
thanks,
--Barry
* NULLs are sorted differently in 7.2
* table correlation names are supported
* GROUP BY and ORDER BY on unrelated columns are supported since 6.4
* ESCAPE/LIKE only supported since 7.1
* outer joins only since 7.1
* the preferred term for procedure is "function"
* the preferred term for catalog is "database"
* supports SELECT FOR UPDATE since 6.5
* supports subqueries
* supports UNION; supports UNION ALL since 7.1
* update some of the max lengths to match reality
* rearrange some functions to match the order in the spec,
  for easier maintenance
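A small sketch querying a few of the DatabaseMetaData capabilities touched by
this change (connection details are made up):

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;

    public class MetaDataCapabilities {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
            DatabaseMetaData dbmd = con.getMetaData();
            System.out.println("outer joins:       " + dbmd.supportsOuterJoins());
            System.out.println("UNION ALL:         " + dbmd.supportsUnionAll());
            System.out.println("SELECT FOR UPDATE: " + dbmd.supportsSelectForUpdate());
            System.out.println("procedure term:    " + dbmd.getProcedureTerm());
            System.out.println("catalog term:      " + dbmd.getCatalogTerm());
            con.close();
        }
    }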
This patch creates the formatting object inside the initialization
section instead of doing it every time the setTimestamp method is
called. Thanks to Dave Harkness for this suggestion.
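A sketch of the pattern described, assuming the object in question is a
SimpleDateFormat (an assumption, not confirmed by the text): build the
formatter once during initialization rather than on every call.

    import java.sql.Timestamp;
    import java.text.SimpleDateFormat;

    public class TimestampFormatter {
        // created once, in the initialization section, not on every format() call
        private final SimpleDateFormat df =
            new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

        public String format(Timestamp ts) {
            // SimpleDateFormat is not thread-safe, so guard shared use
            synchronized (df) {
                return df.format(ts);
            }
        }
    }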
Barry Lind
I have a couple of patches to submit. These were done for the jdbc2
driver. The first one adds support for Types.BIT in the
PreparedStatement class. The following lines need to be inserted in the
switch statement, at around line 530
(PreparedStatement, line 554, before the default: case of the switch):
    case Types.BIT:
        if (x instanceof Boolean) {
            set(parameterIndex, ((Boolean)x).booleanValue() ? "TRUE" : "FALSE");
        } else {
            throw new PSQLException("postgresql.prep.type");
        }
        break;
The second one deals with blobs. The following lines are
inserted in PreparedStatement.java (after the previous patch, around line 558):
    case Types.BINARY:
    case Types.VARBINARY:
        setObject(parameterIndex, x);
        break;
and in ResultSet.java (around line 857):
    case Types.BINARY:
    case Types.VARBINARY:
        return getBytes(columnIndex);
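A usage sketch of the two additions above (the table and column names are made
up for illustration):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Types;

    public class BitAndBinaryExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
            PreparedStatement ps = con.prepareStatement(
                "INSERT INTO flags_and_blobs (flag, data) VALUES (?, ?)");
            ps.setObject(1, Boolean.TRUE, Types.BIT);                 // new Types.BIT case
            ps.setObject(2, new byte[] { 1, 2, 3 }, Types.VARBINARY); // new BINARY/VARBINARY case
            ps.executeUpdate();
            ps.close();
            con.close();
        }
    }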
Ned Wolpert <ned.wolpert@knowledgenet.com>
This patch fixes a problem with a non-multibyte database losing 8-bit
characters. It causes the jdbc driver to ignore the encoding reported by
the database when multibyte isn't enabled and to use the JVM default in
that case.
Barry Lind
I have timestamp columns in my database, and often need the latest
timestamp, but want to format it as a date. With 7.0.x, I just
    select ts from foo order by ts desc limit 1
and in java: d = res.getDate(1);
but this fails everywhere in my code now :(
http://java.sun.com/j2se/1.3/docs/guide/jdbc/spec/jdbc-spec.frame7.html
says
The ResultSet.getXXX methods will attempt to
convert whatever SQL type was returned by the
database to whatever Java type is returned by
the getXXX method.
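A sketch of a workaround under the behaviour described: fetch the value as a
Timestamp and derive a java.sql.Date from it, rather than relying on getDate()
converting the timestamp column (table and column names are taken from the
example above; connection details are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class LatestTimestampAsDate {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery(
                "select ts from foo order by ts desc limit 1");
            if (rs.next()) {
                java.sql.Timestamp ts = rs.getTimestamp(1);
                java.sql.Date d = new java.sql.Date(ts.getTime());
                System.out.println(d);
            }
            rs.close();
            stmt.close();
            con.close();
        }
    }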
Palle Girgensohn