Re: JDBC and processing large numbers of rows - Mailing list pgsql-jdbc

From Sean Shanny
Subject Re: JDBC and processing large numbers of rows
Date
Msg-id 40A17610.7040600@earthlink.net
In response to Re: JDBC and processing large numbers of rows  ("David Wall" <d.wall@computer.org>)
Responses Re: JDBC and processing large numbers of rows
Re: JDBC and processing large numbers of rows
List pgsql-jdbc
David,

If you are simply trying to limit the amount of data that comes over the wire so as not to blow out the Java process, do the following:

Connection myConnection = your connection pool
myConnection.setAutoCommit(false);  <---- Make sure you do this.

PreparedStatement ps = ....
ps.setFetchSize(5000);

ps.executeQuery(.....);

This will bring back the result set in 5000 row chunks.

Make sure you do not end your SQL with a ; (semi colon) as that will cause the fetching part not to work.  Don't know why but it does. :-)
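The snippet above can be fleshed out into a complete method. This is only a sketch: the `DataSource` pool and the table/column names (`mytable`, `a`, `b`, borrowed from later in the thread) are placeholders, but the two load-bearing details from the advice above are `setAutoCommit(false)` and a positive fetch size, which together make the PostgreSQL driver pull rows in batches instead of buffering the whole result in memory.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class BatchedRead {
    // Fetch size is a hint to the driver: rows come back in chunks of this size.
    static final int FETCH_SIZE = 5000;

    static void streamRows(DataSource pool) throws SQLException {
        try (Connection conn = pool.getConnection()) {
            conn.setAutoCommit(false);  // required: batched fetching only works inside a transaction
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT a, b FROM mytable")) {  // note: no trailing semicolon
                ps.setFetchSize(FETCH_SIZE);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // process rs.getString(1), rs.getString(2), ...
                    }
                }
            }
            conn.commit();
        }
    }
}
```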

--sean

David Wall wrote:
Thanks, Dave.  Does anybody have any simple examples of the series of JDBC
calls used to declare, open, fetch and close a cursor in PG?  In Oracle?  I
know this is a PG list, so if no Oracle examples, can anybody at least
confirm that using cursors with Oracle and standard JDBC is possible?

There's nothing like having to write custom code to implement what Java
purports to be write once, run anywhere!  It seems that the JDBC spec would
have to be severely lacking if you can't do something as simple (and old) as
use cursors in a standard way.

From what little I can gather, it seems that in PG, I'd do something like:

ps = connection.prepareStatement("DECLARE mycursor CURSOR FOR SELECT a,b
FROM mytable;");
ps.execute();
ps = connection.prepareStatement("FETCH 100 FROM mycursor;");
ResultSet rs = ps.executeQuery();
...process the resultset....possibly doing more FETCHes and getting more
resultsets...
ps = connection.prepareStatement("CLOSE mycursor;");  // is that needed, or
will it close on commit?
connection.commit();

Is that even close?
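For what it's worth, the PostgreSQL sequence sketched above is essentially right. A minimal sketch, keeping the cursor name, table, and batch size of 100 from the question (a plain `Statement` suffices since nothing is parameterized; this assumes a live connection, so treat it as illustrative):

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CursorLoop {
    // Builds the FETCH statement for a given cursor and batch size.
    static String fetchSql(String cursor, int rows) {
        return "FETCH " + rows + " FROM " + cursor;
    }

    static void readAll(Connection conn) throws SQLException {
        conn.setAutoCommit(false);  // DECLARE only works inside a transaction
        try (Statement st = conn.createStatement()) {
            st.execute("DECLARE mycursor CURSOR FOR SELECT a, b FROM mytable");
            while (true) {
                // FETCH returns rows like an ordinary query, so executeQuery works
                try (ResultSet rs = st.executeQuery(fetchSql("mycursor", 100))) {
                    if (!rs.next()) break;  // empty batch: cursor exhausted
                    do {
                        // process rs.getString(1), rs.getString(2), ...
                    } while (rs.next());
                }
            }
            st.execute("CLOSE mycursor");  // optional: commit/rollback also closes it
        }
        conn.commit();
    }
}
```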

In Oracle, this seems even more questionable because the FETCH semantics
appear to want to use host variables, so I'm not even sure what the FETCH
statement would look like to get the data back in a ResultSet.

ps = connection.prepareStatement("DECLARE CURSOR mycursor FOR SELECT a,b
FROM mytable; END;");
ps.execute();
ps = connection.prepareStatement("FOR 100 FETCH mycursor [INTO????];");
ResultSet rs = ps.executeQuery();
...process the resultset....possibly doing more FETCHes and getting more
resultsets...
ps = connection.prepareStatement("CLOSE mycursor;");  // is that needed, or
will it close on commit?
connection.commit();


Does anybody out there have real experience doing any of this?

Thanks,
David


----- Original Message ----- 
From: "Dave Cramer" <pg@fastcrypt.com>
To: "David Wall" <d.wall@computer.org>
Cc: <pgsql-jdbc@postgresql.org>
Sent: Tuesday, May 11, 2004 3:51 PM
Subject: Re: [JDBC] JDBC and processing large numbers of rows

 
Well, if all else fails you may have to write a wrapper around them to
deal with the discrepancies between Oracle and Postgres.

One thing though, be warned holdable cursors in postgres have to be
materialized, so you may end up running out of server memory. This means
that you need to be inside a transaction to get a non-holdable cursor.
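The caveat above can be made concrete with standard JDBC: ask for a non-holdable result explicitly via the JDBC 3.0 holdability constants, and keep the whole read inside one transaction so nothing has to be materialized server-side. A sketch, again with a hypothetical `mytable`:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class NonHoldableRead {
    static void readInsideTx(Connection conn) throws SQLException {
        conn.setAutoCommit(false);  // stay inside one transaction...
        try (Statement st = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY,
                ResultSet.CONCUR_READ_ONLY,
                ResultSet.CLOSE_CURSORS_AT_COMMIT)) {  // ...with a non-holdable cursor
            st.setFetchSize(1000);  // batch rows rather than buffering them all
            try (ResultSet rs = st.executeQuery("SELECT a, b FROM mytable")) {
                while (rs.next()) {
                    // process the row
                }
            }
        }
        conn.commit();  // the cursor goes away here; nothing was materialized
    }
}
```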

--dc--

On Tue, 2004-05-11 at 16:32, David Wall wrote:

Use cursors to page through really large result sets

Well, I've thought about that, but that just led me to my 3rd question in my
previous inquiry:

3) How do people use cursors in JDBC?  Being able to FETCH seems like a nice
way to handle question #2 above in a batch program, since only a subset of
rows needs to be retrieved from the db at a time.  Cursors probably don't
work for question #1 above since keeping a transaction alive across page
views is generally frowned upon and even hard to accomplish since it means
locking up a connection to the db for each paging user.

The question for me is how do you portably use cursors so that you can work
with Oracle and PG seamlessly?  I presume there might be some (hopefully)
slight variations, like there are with BLOBs, but it would be nice if using
cursors was standardized enough to make it work using standard JDBC.

It seems that the issues are with defining a cursor, executing it, fetching
against it, then releasing it when done.  Is there a standard way to do
this?  Any examples?

Thanks,
David



-- 
Dave Cramer
519 939 0336
ICQ # 14675561



