Thread: JDBC 2.0 Compatibility?

JDBC 2.0 Compatibility?

From
"Renaud Waldura"
Date:
Is there a document somewhere that describes the level of conformance of the
PostgreSQL JDBC driver? Specifically, I'd like to know if/how the more
advanced features of JDBC are implemented -- e.g. scrollable/updatable
result sets, fetchSize, prepared statements, batch updates, etc.

I guess some of these features can only be correctly implemented by the JDBC
driver if the underlying database connector supports them. Is that
documented? I'm using PG 7.3. I already went through the manual, maybe I've
missed it?

Thanks for any pointers!



Re: JDBC 2.0 Compatibility?

From
Dave Cramer
Date:
Not sure if there is a page, but to answer your question: yes to all.

Dave

On Mon, 2003-11-03 at 13:05, Renaud Waldura wrote:
> Is there a document somewhere that describes the level of conformance of the
> PostgreSQL JDBC driver? Specifically, I'd like to know if/how the more
> advanced features of JDBC are implemented -- e.g. scrollable/updatable
> result sets, fetchSize, prepared statements, batch updates, etc.
>
> I guess some of these features can only be correctly implemented by the JDBC
> driver if the underlying database connector supports them. Is that
> documented? I'm using PG 7.3. I already went through the manual, maybe I've
> missed it?
>
> Thanks for any pointers!
>
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 6: Have you searched our list archives?
>
>                http://archives.postgresql.org
>
>


Re: JDBC 2.0 Compatibility?

From
"Renaud Waldura"
Date:
Thanks. I guess part of my question is about how these features are
implemented.

E.g. I tested scrollable result sets with the 7.3.4 driver, and I got the
feeling it's implemented by reading all rows into memory first... which
would negate some of the benefits. I thought it would behave more like
LIMIT/OFFSET, bringing back pages of data when needed. Did I do
something wrong or is this expected behavior?

Same with batch updates (which I didn't test): they can be emulated by the
driver, with no performance benefits, or they can be implemented "for
real". Same with prepared statements.

Maybe what I'm after is a performance document specifically for the PG JDBC
driver.



----- Original Message -----
From: "Dave Cramer" <pg@fastcrypt.com>
To: "Renaud Waldura" <renaud+pgsql@waldura.com>
Cc: <pgsql-jdbc@postgresql.org>
Sent: Monday, November 03, 2003 10:46 AM
Subject: Re: [JDBC] JDBC 2.0 Compatibility?


> Not sure if there is a page, but to answer your question: yes to all.
>
> Dave
>
> On Mon, 2003-11-03 at 13:05, Renaud Waldura wrote:
> > Is there a document somewhere that describes the level of conformance of the
> > PostgreSQL JDBC driver? Specifically, I'd like to know if/how the more
> > advanced features of JDBC are implemented -- e.g. scrollable/updatable
> > result sets, fetchSize, prepared statements, batch updates, etc.
> >
> > I guess some of these features can only be correctly implemented by the JDBC
> > driver if the underlying database connector supports them. Is that
> > documented? I'm using PG 7.3. I already went through the manual, maybe I've
> > missed it?
> >
> > Thanks for any pointers!
> >
> >
> >



Large queries; fetchsize, cursors and limit/offset

From
"David Wall"
Date:
What are most people doing to handle queries with large result sets?  By
large, I mean results that have perhaps 200 or more rows and in which having
the full resultset would likely be a performance and memory issue.  Often,
we have queries that could return thousands of rows, especially those
performed by batch processing programs/threads.

It seems that the setFetchSize() is not well implemented across JDBC
platforms, and that the LIMIT/OFFSET combo, while workable despite the
potential for overhead on the database (since it returns only a subset, yet
the entire query must be evaluated) is not implemented everywhere, including
Oracle 8i if I'm not mistaken.
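For what it's worth, the LIMIT/OFFSET approach usually reduces to appending two clauses to an ordered query. A sketch with a hypothetical helper (table and ordering are illustrative, not from any one database's docs):

```java
public class PageSqlSketch {
    // Hypothetical helper: append LIMIT/OFFSET paging to a base query.
    // Pages are zero-based; the ORDER BY must be stable for paging to mean anything.
    static String paged(String baseSql, int pageSize, int page) {
        return baseSql + " LIMIT " + pageSize + " OFFSET " + (page * pageSize);
    }
}
```

For example, `paged("SELECT x FROM t ORDER BY x", 100, 2)` produces `SELECT x FROM t ORDER BY x LIMIT 100 OFFSET 200` -- and, as discussed below, the server still evaluates those 200 skipped rows.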

The other option appears to be to use cursors, though I'm not sure how well
that's supported across JDBC drivers either.  For sure, the syntax for
declaring and using a cursor varies among different databases, though I can
work around that pretty easily.

What are most people doing these days for handling large queries, especially
those that have to support not only PostgreSQL, but also Oracle, DB2/UDB
and/or MSFT SQLServer?

Thanks,
David


Re: Large queries; fetchsize, cursors and limit/offset

From
Dave Tenny
Date:
The LIMIT/OFFSET technique, last time I checked, is a poor choice for
PostgreSQL if that's your only way to filter the result set.  I did some
performance work some versions back and discovered that the server must
still process all those leading rows before the offset (ultimately
discarding them), so as you increase the offset you're just reprocessing
the same rows in increasing numbers over and over.

In other words, it isn't a pragmatic solution for really large result sets.
I'll be curious to hear other people's solutions here.

David Wall wrote:

>What are most people doing to handle queries with large result sets?  By
>large, I mean results that have perhaps 200 or more rows and in which having
>the full resultset would likely be a performance and memory issue.  Often,
>we have queries that could return thousands of rows, especially those
>performed by batch processing programs/threads.
>
>It seems that the setFetchSize() is not well implemented across JDBC
>platforms, and that the LIMIT/OFFSET combo, while workable despite the
>potential for overhead on the database (since it returns only a subset, yet
>the entire query must be evaluated) is not implemented everywhere, including
>Oracle 8i if I'm not mistaken.
>
>The other option appears to be to use cursors, though I'm not sure how well
>that's supported across JDBC drivers either.  For sure, the syntax for
>declaring and using a cursor varies among different databases, though I can
>work around that pretty easily.
>
>What are most people doing these days for handling large queries, especially
>those that have to support not only PostgreSQL, but also Oracle, DB2/UDB
>and/or MSFT SQLServer?
>
>Thanks,
>David
>
>


Re: Large queries; fetchsize, cursors and limit/offset

From
Rod Taylor
Date:
On Mon, 2003-11-03 at 17:53, Dave Tenny wrote:
> The LIMIT/OFFSET technique, last time I checked, is a poor choice for
> PostgreSQL

> In other words, it isn't a pragmatic solution for really large result sets.
> I'll be curious to hear other people's solutions here.

Cursors work in that you can ask for the first 100 rows, then the next
100, then the next 100.... It'll sit and wait for you to request the
next group.

The disadvantage is you need to get the same database connection over
and over for the same client. With web stuff that can be tricky.

If you can tie a db connection to a client's session (returning it to the
pool when the client session expires), or have a single connection handle
all of these scroll-type queries, it becomes fairly painless.

With cursors that can cross transaction boundaries (7.4), it is just that
much easier.
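A minimal sketch of such a holdable cursor, assuming a 7.4 server and hypothetical table/column names (the pure helper just builds the DECLARE text):

```java
import java.sql.*;

public class HoldCursorSketch {
    // Build the DECLARE for a cursor that survives COMMIT (WITH HOLD, PostgreSQL 7.4+).
    static String declareWithHold(String name, String query) {
        return "DECLARE " + name + " CURSOR WITH HOLD FOR " + query;
    }

    // Sketch of use: declare, commit, and keep fetching afterwards.
    static void example(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.execute(declareWithHold("held_cursor", "SELECT x FROM abc_table"));
            con.commit(); // the cursor stays open across the transaction boundary
            try (ResultSet rs = st.executeQuery("FETCH 100 FROM held_cursor")) {
                while (rs.next()) { /* process rows */ }
            }
            st.execute("CLOSE held_cursor");
        }
    }
}
```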


Re: Large queries; fetchsize, cursors and limit/offset

From
"David Wall"
Date:
Does anybody have some sample code that shows how to declare a simple cursor
and fetch it 100 rows at a time, for example?  I'm curious how to formulate
this.  It sounds like I need to keep the same Connection object, which is
good info to have.  How do I issue the DECLARE CURSOR and FETCH (using
PreparedStatement.execute() with the FETCH being an executeQuery() so that I
get a result set back)?

From what I gather, the SQL itself looks something like:

DECLARE my_cursor CURSOR FOR SELECT x,y,z FROM abc_table WHERE x>4;

FETCH 100 FROM my_cursor;

CLOSE my_cursor;

What do I call when I'm doing the DECLARE CURSOR, versus the FETCH versus
the CLOSE commands?
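To make the question concrete, that flow can be sketched roughly as follows (table, columns, and the paging loop are illustrative, not from driver documentation): DECLARE and CLOSE go through execute(), while each FETCH goes through executeQuery() and returns one page as a ResultSet.

```java
import java.sql.*;

public class CursorFetchSketch {
    // Pure helper so the FETCH text is easy to see (and to test).
    static String fetchSql(int rows, String cursor) {
        return "FETCH " + rows + " FROM " + cursor;
    }

    static void readInPages(Connection con) throws SQLException {
        con.setAutoCommit(false); // cursors live inside a transaction
        try (Statement st = con.createStatement()) {
            st.execute("DECLARE my_cursor CURSOR FOR SELECT x, y, z FROM abc_table WHERE x > 4");
            boolean more = true;
            while (more) {
                more = false;
                try (ResultSet rs = st.executeQuery(fetchSql(100, "my_cursor"))) {
                    while (rs.next()) {
                        more = true;
                        // process rs.getInt("x"), rs.getString("y"), ...
                    }
                }
            }
            st.execute("CLOSE my_cursor"); // an empty FETCH means the cursor is exhausted
        }
        con.commit();
    }
}
```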

Does anybody know if this sort of code would then work in Oracle 8i if I
used a modified set of Oracle commands, i.e. something like:

DECLARE CURSOR my_cursor FOR SELECT x,y,z FROM abc_table WHERE x>4;
END;

FOR 100 FETCH my_cursor;  ???  No "host variables" with JDBC so I'm not sure
I can do this since the syntax implies an "INTO" clause for using host
variables.

Thanks,
David


Re: Large queries; fetchsize, cursors and limit/offset

From
Dave Cramer
Date:
The driver uses cursors now if you do two things:

1) the request must be made inside a transaction (cursors for large result
sets should be anyway)
2) call setFetchSize()
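A minimal sketch of those two conditions (the query and table are illustrative; the pure mode() helper just names the behaviour being selected):

```java
import java.sql.*;

public class FetchSizeSketch {
    // Pure helper: which mode the driver would use for a given fetch size.
    static String mode(int fetchSize) {
        return fetchSize > 0 ? "cursor-backed" : "load-all";
    }

    static void streamRows(Connection con) throws SQLException {
        con.setAutoCommit(false);  // 1) inside a transaction
        try (Statement st = con.createStatement()) {
            st.setFetchSize(100);  // 2) non-zero fetch size
            try (ResultSet rs = st.executeQuery("SELECT x, y, z FROM abc_table")) {
                while (rs.next()) {
                    // the driver FETCHes 100 rows at a time behind the scenes
                }
            }
        }
        con.commit();
    }
}
```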

Dave



On Mon, 2003-11-03 at 18:49, David Wall wrote:
> Does anybody have some sample code that shows how to declare a simple cursor
> and fetch it 100 rows at a time, for example?  I'm curious how to formulate
> this.  It sounds like I need to keep the same Connection object, which is
> good info to have.  How do I issue the DECLARE CURSOR and FETCH (using
> PreparedStatement.execute() with the FETCH being an executeQuery() so that I
> get a result set back)?
>
> From what I gather, the SQL itself looks something like:
>
> DECLARE my_cursor CURSOR FOR SELECT x,y,z FROM abc_table WHERE x>4;
>
> FETCH 100 FROM my_cursor;
>
> CLOSE my_cursor;
>
> What do I call when I'm doing the DECLARE CURSOR, versus the FETCH versus
> the CLOSE commands?
>
> Does anybody know if this sort of code would then work in Oracle 8i if I
> used a modified set of Oracle commands, i.e. something like:
>
> DECLARE CURSOR my_cursor FOR SELECT x,y,z FROM abc_table WHERE x>4;
> END;
>
> FOR 100 FETCH my_cursor;  ???  No "host variables" with JDBC so I'm not sure
> I can do this since the syntax implies an "INTO" clause for using host
> variables.
>
> Thanks,
> David
>
>


Re: JDBC 2.0 Compatibility?

From
Oliver Jowett
Date:
On Mon, Nov 03, 2003 at 12:11:43PM -0800, Renaud Waldura wrote:
> Thanks. I guess part of my question is about how these features are
> implemented.
>
> E.g. I tested scrollable result sets with the 7.3.4 driver, and I got the
> feeling it's implemented by reading all rows into memory first... which
> would negate some of the benefits.

The driver loads the whole resultset into memory at once unless you set a
non-zero fetchsize on the statement, regardless of the resultset type.
If you set a non-zero fetchsize, it will use cursors (DECLARE/FETCH) to
process the results in blocks of 'fetchsize' rows at a time.

However, using a non-zero fetchsize is broken with scrollable resultsets
at the moment (forward iteration works, but that's it). This would take some
work to fix properly; I have a patch pending that "fixes" this by disabling
use of cursors if the resultset will be scrollable, but that obviously
doesn't help performance!

> Same with batch updates (which I didn't test): they can be emulated by the
> driver, with no performance benefits, or they can be implemented "for
> real". Same with prepared statements.

Batch updates are supported but give you no performance benefit at present
as they just execute each update individually. Improving this is on my radar
but might be a while yet.
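The API side of that is just the standard JDBC addBatch()/executeBatch() pair; a minimal sketch with a hypothetical table and column (the sumCounts() helper totals the per-statement update counts):

```java
import java.sql.*;

public class BatchSketch {
    // Pure helper: total rows affected across a batch.
    static int sumCounts(int[] counts) {
        int s = 0;
        for (int c : counts) s += c;
        return s;
    }

    static int insertAll(Connection con, int[] values) throws SQLException {
        try (PreparedStatement ps =
                 con.prepareStatement("INSERT INTO abc_table (x) VALUES (?)")) {
            for (int v : values) {
                ps.setInt(1, v);
                ps.addBatch();  // queued client-side
            }
            // With the driver as described above, this still executes one
            // statement per entry -- same round trips, just a tidier API.
            return sumCounts(ps.executeBatch());
        }
    }
}
```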

Reusing prepared statements is slightly faster than constructing a new
statement each time. There are also methods on org.postgresql.PGStatement to
turn on use of server-side PREPARE/EXECUTE, which has a more significant
effect. I have an old patch that automatically enables this after a certain
number of reuses of the statement, so you don't need to do anything
postgresql-specific to enable it, but it needs rework to apply to current
CVS -- and I'm not sure if there's consensus that this is the right thing to
do anyway.
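A sketch of that driver-specific route follows. The names org.postgresql.PGStatement and setUseServerPrepare(boolean) match drivers of that era but should be treated as assumptions to verify against your driver version; reflection keeps the sketch compilable even without the PostgreSQL jar on the classpath.

```java
import java.lang.reflect.Method;
import java.sql.PreparedStatement;

public class ServerPrepareSketch {
    // Try to enable server-side PREPARE/EXECUTE through the driver-specific
    // extension. Returns true only if the extension was found and invoked.
    static boolean enableServerPrepare(PreparedStatement ps) {
        try {
            Class<?> pgStmt = Class.forName("org.postgresql.PGStatement");
            if (pgStmt.isInstance(ps)) {
                Method m = pgStmt.getMethod("setUseServerPrepare", boolean.class);
                m.invoke(ps, true);
                return true;
            }
        } catch (ReflectiveOperationException e) {
            // driver absent, or too old to have the method; fall through
        }
        return false;
    }
}
```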

> Maybe what I'm after is a performance document specifically for the PG JDBC
> driver.

There's nothing I know of along these lines.

-O

Re: Large queries; fetchsize, cursors and limit/offset

From
"David Wall"
Date:
Does that happen with the JDBC driver that comes with PG 7.3.4, or do I need
a more recent version?  Does it work with JDK 1.3.1?  Is this behavior
standard for PG JDBC (that is, will it be supported going forward as "the
way" to handle cursors via JDBC)?  That certainly is a great way to
implement matters as far as I can tell.

Does anybody know if other JDBC drivers are going to do the same thing?  One
critical aspect for our application is that I also have to make it work
under Oracle 8i (and soon 9i and beyond) right now, with the probability
that we'd have to support DB2/UDB and MSFT SQLServer over time.

Thanks for this tip, though.

David

----- Original Message -----
From: "Dave Cramer" <pg@fastcrypt.com>
To: "David Wall" <d.wall@computer.org>
Cc: "Rod Taylor" <rbt@rbt.ca>; <pgsql-jdbc@postgresql.org>
Sent: Monday, November 03, 2003 4:01 PM
Subject: Re: [JDBC] Large queries; fetchsize, cursors and limit/offset


> The driver uses cursors now if you do two things:
>
> 1) the request must be made inside a transaction (cursors for large result
> sets should be anyway)
> 2) call setFetchSize()
>
> Dave
>
>
>
> On Mon, 2003-11-03 at 18:49, David Wall wrote:
> > Does anybody have some sample code that shows how to declare a simple cursor
> > and fetch it 100 rows at a time, for example?  I'm curious how to formulate
> > this.  It sounds like I need to keep the same Connection object, which is
> > good info to have.  How do I issue the DECLARE CURSOR and FETCH (using
> > PreparedStatement.execute() with the FETCH being an executeQuery() so that I
> > get a result set back)?
> >
> > From what I gather, the SQL itself looks something like:
> >
> > DECLARE my_cursor CURSOR FOR SELECT x,y,z FROM abc_table WHERE x>4;
> >
> > FETCH 100 FROM my_cursor;
> >
> > CLOSE my_cursor;
> >
> > What do I call when I'm doing the DECLARE CURSOR, versus the FETCH
versus
> > the CLOSE commands?
> >
> > Does anybody know if this sort of code would then work in Oracle 8i if I
> > used a modified set of Oracle commands, i.e. something like:
> >
> > DECLARE CURSOR my_cursor FOR SELECT x,y,z FROM abc_table WHERE x>4;
> > END;
> >
> > FOR 100 FETCH my_cursor;  ???  No "host variables" with JDBC so I'm not sure
> > I can do this since the syntax implies an "INTO" clause for using host
> > variables.
> >
> > Thanks,
> > David
> >
> >


Re: Large queries; fetchsize, cursors and limit/offset

From
Dave Cramer
Date:
This is the behaviour of the current CVS driver, which will be released
with 7.4.

It will work with a 7.3 db, and will work with JDK 1.3.1.


As far as whether other drivers do it this way, I have no idea. The spec
is vague in this regard: setFetchSize() is just a hint to the driver, which
can implement it any way it sees fit.

I would expect Oracle to behave this way, but have no idea if it
actually does.

Dave

On Mon, 2003-11-03 at 20:39, David Wall wrote:
> Does that happen with the JDBC driver that comes with PG 7.3.4, or do I need
> a more recent version?  Does it work with JDK 1.3.1?  Is this behavior
> standard for PG JDBC (that is, will it be supported going forward as "the
> way" to handle cursors via JDBC)?  That certainly is a great way to
> implement matters as far as I can tell.
>
> Does anybody know if other JDBC drivers are going to do the same thing?  One
> critical aspect for our application is that I also have to make it work
> under Oracle 8i (and soon 9i and beyond) right now, with the probability
> that we'd have to support DB2/UDB and MSFT SQLServer over time.
>
> Thanks for this tip, though.
>
> David
>
> ----- Original Message -----
> From: "Dave Cramer" <pg@fastcrypt.com>
> To: "David Wall" <d.wall@computer.org>
> Cc: "Rod Taylor" <rbt@rbt.ca>; <pgsql-jdbc@postgresql.org>
> Sent: Monday, November 03, 2003 4:01 PM
> Subject: Re: [JDBC] Large queries; fetchsize, cursors and limit/offset
>
>
> > The driver uses cursors now if you do two things:
> >
> > 1) the request must be made inside a transaction (cursors for large result
> > sets should be anyway)
> > 2) call setFetchSize()
> >
> > Dave
> >
> >
> >
> > On Mon, 2003-11-03 at 18:49, David Wall wrote:
> > > Does anybody have some sample code that shows how to declare a simple cursor
> > > and fetch it 100 rows at a time, for example?  I'm curious how to formulate
> > > this.  It sounds like I need to keep the same Connection object, which is
> > > good info to have.  How do I issue the DECLARE CURSOR and FETCH (using
> > > PreparedStatement.execute() with the FETCH being an executeQuery() so that I
> > > get a result set back)?
> > >
> > > From what I gather, the SQL itself looks something like:
> > >
> > > DECLARE my_cursor CURSOR FOR SELECT x,y,z FROM abc_table WHERE x>4;
> > >
> > > FETCH 100 FROM my_cursor;
> > >
> > > CLOSE my_cursor;
> > >
> > > What do I call when I'm doing the DECLARE CURSOR, versus the FETCH
> versus
> > > the CLOSE commands?
> > >
> > > Does anybody know if this sort of code would then work in Oracle 8i if I
> > > used a modified set of Oracle commands, i.e. something like:
> > >
> > > DECLARE CURSOR my_cursor FOR SELECT x,y,z FROM abc_table WHERE x>4;
> > > END;
> > >
> > > FOR 100 FETCH my_cursor;  ???  No "host variables" with JDBC so I'm not sure
> > > I can do this since the syntax implies an "INTO" clause for using host
> > > variables.
> > >
> > > Thanks,
> > > David
> > >
> > >