Thread: JDBC 2.0 conformance, documentation and todo list
Hello,

I've created a web page that aims to document the level of conformance of the JDBC driver to the JDBC 2.0 API. It also aims to document any deviations from the JDBC standard that have been found.

http://lab.applinet.nl/postgresql-jdbc/

Needless to say, it's still far from complete. I've only written the sections on Array and Batch Updates so far. I'm volunteering to finish and maintain this page if people find it useful. Any comments?

Any feedback or additions will be greatly appreciated. If you know about a JDBC 2.0 feature that seems to be missing, or a feature you know is implemented, please post a message on this list.

The reason I created this page is that I wanted to work on JDBC 2.0 conformance, and except for the Blob/Clob support Barry Lind wrote about a couple of days ago, it wasn't clear what needed to be done. Investigating and documenting the conformance level and known deficiencies seemed like a good place to start.

Regards,
René Pijlman
Rene,

Have you run the JDBC conformance tests to help generate this information? There was a post a few days ago from one of the Red Hat engineers saying they had run them and were going to summarize the results and post back to the list. (I haven't yet seen that followup.) I ran the tests myself over the weekend, but haven't sorted through the results either. If you want, I could send you the log from that run.

thanks,
--Barry

Rene Pijlman wrote:
> I've created a web page that aims to document the level of
> conformance of the JDBC driver to the JDBC 2.0 API.
> http://lab.applinet.nl/postgresql-jdbc/
On Wed, 08 Aug 2001 09:50:11 -0700, you wrote:
>Have you run the JDBC conformance tests to help generate this
>information?

Yes, I have run the tests (the 1.2.1 EE version) and looked at some of the results. This will take more time, since there are hundreds of messages and it isn't always easy to find out what's going on.

Unfortunately, I'm getting many failed results with messages like this:

********************************************************************************
Beginning Test: testAllProceduresAreCallable
********************************************************************************
Using DataSource
ERROR: SQL Exception : No suitable driver
Calling allProceduresAreCallable on DatabaseMetaData
ERROR: Unexpected exception null
ERROR: Call to allProceduresAreCallable is Failed!
ERROR: java.lang.NullPointerException

And I'm quite sure this code in DatabaseMetaData isn't causing the problem :-)

    public boolean allProceduresAreCallable() throws SQLException
    {
        return true;
    }

>There was a post a few days ago from one of the RedHat
>engineers saying they had run it and were going to summarize the results
>and post back to the list. (I haven't yet seen that followup).

It would be good to hear from them.

>I ran the tests myself over the weekend, but haven't sorted
>through the results either.

Could you do a quick grep for the message quoted above? I'd really like to know if this is caused by a problem in my setup... thanks.

Regards,
René Pijlman
At 02:55 PM 8/8/2001, Rene Pijlman wrote:
>Unfortunately, I'm getting many failed results with messages
>like this:
>
>Using DataSource
>ERROR: SQL Exception : No suitable driver
>Calling allProceduresAreCallable on DatabaseMetaData
>ERROR: Unexpected exception null
>ERROR: Call to allProceduresAreCallable is Failed!
>ERROR: java.lang.NullPointerException

The first error tells me the JAR with the driver is not in your classpath. The manager cannot find the PostgreSQL JDBC driver, and thus you are getting a null connection object from DriverManager.getConnection(). The second error then is the result of doing

    result = conn.allProceduresAreCallable();

with conn == null.

Peace,
Dave
On Wed, 08 Aug 2001 15:11:03 -0700, Dave Harkness wrote:
>The first error tells me the JAR with the driver is not in your classpath.
>The manager cannot find the PostgreSQL JDBC driver, and thus you are
>getting a null connection object from DriverManager.getConnection().

Yes, but the same test suite has hundreds of tests that *do* succeed. I can't imagine the driver is flipping in and out of the classpath :-) But I'll double-check, thanks. Perhaps it has something to do with connection pooling (these tests run in J2EE EJBs)...

In case someone wants to have a look at it, the (huge) log file is available at http://lab.applinet.nl/postgresql-jdbc/PostgreSQL_7.1.2_CTS_1.2.1a_jdbc-tests_log.txt (6.4 MB). The test suite can be downloaded from http://java.sun.com/products/jdbc/jdbctestsuite-1_2_1.html. It requires a J2EE platform, such as Sun's reference implementation in the J2EE SDK.

Regards,
René Pijlman
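[Editor's note: one quick way to rule the classpath theory in or out is to verify that the driver jar actually appears on the classpath the test harness runs with. A minimal shell sketch; the jar name and path below are assumptions, substitute your actual install:

```shell
# Hypothetical location; substitute the path to your actual driver jar.
DRIVER_JAR="$HOME/postgresql/jdbc7.1-1.2.jar"
CLASSPATH="$DRIVER_JAR:$CLASSPATH"
export CLASSPATH

# List the classpath entries one per line and count how many mention
# the driver jar; a result of 0 means the harness cannot see the driver.
echo "$CLASSPATH" | tr ':' '\n' | grep -c 'jdbc7.1-1.2.jar'
```

Remember that with EJB vehicles the J2EE server has its own classpath, separate from the client's, so both need checking.]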
My guess is that it is a setup problem. I get a success for this test.

--Barry

********************************************************************************
Beginning Test: testAllProceduresAreCallable
********************************************************************************
SVR: Using DataSource
SVR: intTabSize: 5
SVR: intTabTypeSize: 5
SVR: createString1: create table ctstable1 (TYPE_ID int, TYPE_DESC varchar(32), primary key(TYPE_ID))
SVR: createString: create table ctstable2 (KEY_ID int, COF_NAME varchar(32), PRICE float, TYPE_ID int, primary key(KEY_ID), foreign key(TYPE_ID) references ctstable1)
SVR: Created the tables ctstable1 and ctstable2
SVR: Calling allProceduresAreCallable on DatabaseMetaData
SVR: allProceduresAreCallable method called by the current user
SVR: Removed the tables ctstable1 and ctstable2
SVR: Closed the database connection
SVR: Cleanup ok;
SVR: Test running in ejb vehicle passed
Test: testAllProceduresAreCallable returned from running in ejb vehicle
********************************************************************************
End Test: testAllProceduresAreCallable...........PASSED
********************************************************************************

Rene Pijlman wrote:
> Yes, I have run the test (1.2.1 EE version) and looked at some
> of the results. This will take more time, since there are
> hundreds of messages and it isn't always easy to find out what's
> going on.
Rene,

First off, thank you for pulling this information together in one place. It is really appreciated.

I was going through your list of issues and I had the following comments to add:

Batch Updates

The current implementation is poor. As you point out, storing up the statements and then executing them one by one defeats the purpose of the batch methods. The intended behaviour is to send a set of updates/inserts to the database in one round trip. The server does support this functionality: you can send multiple statements in one call by using a semicolon as a statement separator, and the server will then execute them all at once. The one limitation is that the oid/row count returned by such a batch update only reflects the oid/row count of the last statement in the batch. Reading the spec, this behaviour is conformant, if not ideal.

DatabaseMetaData

getDatabaseProductVersion - I get a pass on this test when I run it.

supportsANSI92EntryLevelSQL - Since postgres now supports outer joins, I think the answer here should be yes. I think the general feeling is that if there is a deviation from entry-level SQL92, it is a bug.

PreparedStatement

The bytea type is documented for 7.2. You can see it in the current docs off of the developers corner links.

The driver does implement setBlob the same way as setBinaryStream. In fact, it uses setBinaryStream in its implementation. I believe that setBlob is functionally correct in its assumption that the underlying type is oid and thus a LargeObject.

General Requirements

ODBC escape processing is minimally handled. The escapes for date format are supported, but not the rest.

thanks,
--Barry

Rene Pijlman wrote:
> I've created a web page that aims to document the level of
> conformance of the JDBC driver to the JDBC 2.0 API.
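[Editor's note: to make the batch suggestion above concrete, here is a minimal sketch of queuing statements and joining them with semicolons so the whole batch travels to the backend in one call. The class and method names are hypothetical illustrations, not the driver's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of how an executeBatch() implementation could
// collapse the queued statements into one semicolon-separated string,
// so the entire batch is sent to the server in a single round trip.
public class BatchSketch {
    private final List<String> batch = new ArrayList<String>();

    // Queue a statement, mirroring Statement.addBatch().
    public void addBatch(String sql) {
        batch.add(sql);
    }

    // Join the queued statements with semicolons. The server executes
    // them together; note Barry's caveat that only the last statement's
    // oid/row count comes back.
    public String buildBatchQuery() {
        return String.join("; ", batch);
    }
}
```

The joined string would then be sent through the driver's normal single-query path.]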
Hello Barry,

Thanks a lot. I've incorporated your items on http://lab.applinet.nl/postgresql-jdbc/

I'll also incorporate your postings from last week about bytea, large objects and such.

Regards,
René Pijlman

On Wed, 08 Aug 2001 20:56:05 -0700, you wrote:
>First off, thank you for pulling this information together in one place.
>It is really appreciated.
>
>I was going through your list of issues and I had the following comments
>to add: [...]
I'm still here :) There have been a lot of holidays here, so I'm starting to go through the JDBC test results now. Hopefully it won't take too long, but there are a lot of results. In any case, I've attached a quick list of failures for reference and/or comparison. Tests that failed multiple times are only listed once. There's no context as to which group the tests fail in, etc., so it's more for just a bird's-eye view of what's not working.

As for your problem, Rene, I concur that it is likely a problem with your setup, as a grep on three test logs does not turn up any failures like the ones you're getting.

Liam

--
Liam Stewart :: Red Hat Canada, Ltd. :: liams@redhat.com
Oops... hit send there too quickly and didn't attach that list. Here it is.

Liam

--
Liam Stewart :: Red Hat Canada, Ltd. :: liams@redhat.com
[Attachment: list of failing tests]
> Hello Barry,
>
> Thanks a lot. I've incorporated your items
> on http://lab.applinet.nl/postgresql-jdbc/
>
> I'll also incorporate your postings from last week about bytea,
> large objects and such.

Wow, this looks great.

--
Bruce Momjian                  | http://candle.pha.pa.us
pgman@candle.pha.pa.us         | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup.    | Drexel Hill, Pennsylvania 19026
Today's impossible question =:-D

I really need an ARRAY but I am stuck with JDBC 1.0. The table looks like this:

festival | filmid
------------------------------------------
1979     | 102, 103, 104
1980     | 258, 369, 489, 568

ad nauseam.

How can I simulate an ARRAY of this type? If I can't - where is the nearest bridge?

Cheers

Tony Grant

--
RedHat Linux on Sony Vaio C1XD/S http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL http://www.animaproductions.com/ultra.html
Tony,

Offhand, I would say you need to wrap this with some code which takes an array and stores it comma-separated, and vice versa.

This is a pretty simple solution; is there something I am missing?

Dave

-----Original Message-----
From: Tony Grant
Subject: [JDBC] tough one

> I really need an ARRAY but I am stuck with JDBC 1.0.
> How can I simulate an ARRAY of this type?
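[Editor's note: a minimal sketch of the wrapper Dave describes. The class name and the exact column format ("102, 103, 104") are assumptions taken from Tony's example table:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper for simulating an ARRAY column under JDBC 1.0:
// store the film ids as a comma-separated varchar, and convert to and
// from a Java list on either side of the ResultSet/PreparedStatement.
public class FilmIds {
    // "102, 103, 104" -> [102, 103, 104]
    public static List<Integer> parse(String column) {
        List<Integer> ids = new ArrayList<Integer>();
        for (String part : column.split(",")) {
            ids.add(Integer.valueOf(part.trim()));
        }
        return ids;
    }

    // [102, 103, 104] -> "102, 103, 104", ready to store back in the column
    public static String format(List<Integer> ids) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < ids.size(); i++) {
            if (i > 0) sb.append(", ");
            sb.append(ids.get(i));
        }
        return sb.toString();
    }
}
```

Reading would then look like `FilmIds.parse(rs.getString("filmid"))`, and writing like `stmt.setString(2, FilmIds.format(ids))`.]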
On 09 Aug 2001 11:24:20 -0400, Dave Cramer wrote:
> Offhand, I would say you need to wrap this with some code which takes an
> array, and stores it comma separated, and vice versa.
>
> This is a pretty simple solution, is there something I am missing?

That's what I'm looking at now: trying to get some functions worked out that read the array looking for the right values. I am still getting up to speed on queries more complicated than "select x from z where y = var" stuff, but I must admit I am getting there, slowly but surely.

Thanks for pointing me in the right direction.

Cheers

Tony

--
RedHat Linux on Sony Vaio C1XD/S http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL http://www.animaproductions.com/ultra.html
Barry Lind writes:
> supportsANSI92EntryLevelSQL - Since postgres now does support outer
> joins, I think the answer here should be yes. I think the general
> feeling is that if there is a deviation from entry level SQL92 it is a bug.

Outer joins are not required for entry level SQL. Nevertheless, PostgreSQL does not and probably will never comply with entry level SQL to the letter, so the answer "false" is correct.

--
Peter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter
Peter Eisentraut <peter_e@gmx.net> writes:
> Outer joins are not required for entry level SQL. Nevertheless,
> PostgreSQL does not and probably will never comply with entry level SQL to
> the letter, so the answer "false" is correct.

I think the biggest remaining shortcoming compared to "entry SQL92" is the lack of schema support. I haven't groveled through the spec in detail to see what else is missing, however. (Once we have schemas, we can try to run the NIST compliance tests and see what they complain of...)

regards, tom lane
I would also be willing to bet that most other databases don't support entry level SQL to the letter either (look at how Oracle treats the empty string, i.e. '', as null). But I bet Oracle claims entry level SQL support in their JDBC driver, since it is a requirement for J2EE. I don't see any problem with claiming support for entry level even though there are a few exceptions.

thanks,
--Barry

Peter Eisentraut wrote:
> Outer joins are not required for entry level SQL. Nevertheless,
> PostgreSQL does not and probably will never comply with entry level SQL to
> the letter, so the answer "false" is correct.
Barry Lind writes:
> I don't see any problem with claiming support for entry level even
> though there are a few exceptions.

I might be able to agree if the only exceptions were that we fold identifier case the wrong way and have different reserved words, but certain things like schemas are rather largish exceptions.

I'm not really sure what the point of this function is anyway. We could leave it as "false" perpetually as a means of protest against the contortions of the SQL standard. ;-)

--
Peter Eisentraut   peter_e@gmx.net   http://funkturm.homeip.net/~peter
OK, I didn't realize that schema support was part of entry level. Given that it is, I agree that 'false' is a better response at this time. However, after schema support is added in 7.3 (hopefully), I would like this to be 'true'. Or are there other large features missing from postgres that are part of entry level SQL92 that I am not aware of?

I will submit a patch to add comments to the code reflecting this discussion.

thanks,
--Barry

Peter Eisentraut wrote:
> I might be able to agree if the only exceptions were that we fold
> identifier case the wrong way and have different reserved words, but
> certain things like schemas are rather largish exceptions.
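[Editor's note: a sketch of what such a commented method might look like. This is not the committed patch; the wrapper class name and comment wording are illustrative only:

```java
import java.sql.SQLException;

// Illustrative stand-in for the driver's DatabaseMetaData implementation,
// showing the kind of comment Barry proposes adding.
public class MetaDataSketch {
    public boolean supportsANSI92EntryLevelSQL() throws SQLException {
        // PostgreSQL does not yet implement all of entry-level SQL92;
        // the largest known gap is schema support. Per the list
        // discussion, answer false until schemas are in (hoped for 7.3).
        return false;
    }
}
```
]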
At 16:51 2001/08/09 +0200, Tony Grant wrote:
>festival | filmid
>------------------------------------------
>1979 | 102, 103, 104
>1980 | 258, 369, 489, 568
>
>How can I simulate an ARRAY of this type? If I can't - where is the
>nearest bridge?

You could define a table with this data:

festival | filmid
------------------------------------------
1979     | 102
1979     | 103
1979     | 104
1980     | 258
1980     | 369
1980     | 489
1980     | 568

with a query like this:

    select filmid from films where festival = '1980'

and use a loop like this to build an array:

    Vector films = new Vector();
    while ( rs.next() ) {
        films.add( rs.getString( "filmid" ) );
    }

bye

John