Thread: cursor "jdbc_curs_1" does not exist
Hi!

org.postgresql.util.PSQLException: ERROR: cursor "jdbc_curs_1" does not exist

Can somebody explain this exception? What I do is pretty simple: I read text
from one table, gzip it and insert it into another table. It seems that the
exception is thrown at ResultSet.next(), but I'm not sure. Here's the code
anyway:

final PreparedStatement select = f4t.prepareStatement(
    "SELECT urlid, html FROM html" );
final PreparedStatement insert = f4t.prepareStatement(
    "INSERT INTO legacy.zhtml (id, zhtml) VALUES (?, ?)" );

f4t.setAutoCommit( false );

select.setFetchSize( FETCH_SIZE );

int i = 0;
for( ResultSet row = select.executeQuery(); row.next(); )
{
    final String id = row.getString( "urlid" );
    final String html = row.getString( "html" );

    final ByteArrayOutputStream bos = new ByteArrayOutputStream();
    final DataOutputStream dos = new DataOutputStream( bos );

    WritableUtils.writeCompressedString( dos, html );

    insert.setString( 1, id );
    insert.setBytes( 2, bos.toByteArray() );
    insert.execute();

    System.out.print( '.' );

    if( ++i % 100 == 0 )
    {
        System.out.println();
        f4t.commit();
    }
}

f4t.commit();
f4t.setAutoCommit( true );
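[Editor's note: WritableUtils.writeCompressedString comes from Hadoop, not the JDK. For readers without that dependency, here is a minimal sketch of the same compress-text-to-bytes step using only java.util.zip; the class and method names are illustrative, and it assumes any self-describing compressed framing (gzip here) is acceptable for the bytea column as long as reads use the matching decompressor.]

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class HtmlCompression {

    // Compress a string to gzip bytes, suitable for storing in a bytea column.
    static byte[] compress(String text) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    // Inverse operation, for reading the column back.
    static String decompress(byte[] data) throws IOException {
        try (GZIPInputStream gz =
                     new GZIPInputStream(new ByteArrayInputStream(data))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        String html = "<html><body>hello</body></html>";
        byte[] packed = compress(html);
        System.out.println(decompress(packed).equals(html)); // prints "true"
    }
}
```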
On Tue, 15 Jun 2004 tmp@nitwit.de wrote:

> org.postgresql.util.PSQLException: ERROR: cursor "jdbc_curs_1" does not exist
>
> Can somebody explain this exception? What I do is pretty simple: I read text
> from one table, gzip it and insert it into another table. But it seems that
> the exception is thrown at ResultSet.next() but I'm not sure.

The problem here is that you commit every 100 rows. A commit closes any
open cursors, so the next time you try to call next() the cursor isn't
there any more.

Kris Jurka
On Tuesday 15 June 2004 18:06, you wrote:
> The problem here is that you commit every 100 rows. A commit closes any
> open cursors, so the next time you try to call next the cursor isn't there
> any more.

Hmm. And why does this code work then:

for( int j = 1; row.next(); j++ )
{
    String id = row.getString( "id" );
    URL url = new URL( row.getString( "url" ) );
    Timestamp time = row.getTimestamp( "timestamp" );

    try
    {
        insertUrls.setString( 1, id );
        insertUrls.setString( 2, url.toString() );
        insertUrls.setString( 3, resp.getText() );
        insertUrls.execute();
    }
    catch( Exception e )
    {
        System.err.println( e );
    }
    finally
    {
        if( j % 100 == 0 )
        {
            System.out.println();
            f4t.commit();
        }
    }
}
On Tue, 15 Jun 2004 tmp@nitwit.de wrote:

> Hmm. And why does this code work then:

It is unclear what the differences here are. You haven't shown the
Statement creation and/or execution, so it's possible that you haven't
used setFetchSize, or you've specified a scrollable ResultSet. Maybe
autocommit is on. Any of these will cause a cursor not to be used and
the results to be fetched in bulk, so an intermediate commit won't cause
any problems. Maybe the query doesn't retrieve 100 rows, so it never
gets to the commit.

Kris Jurka
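[Editor's note: the conditions Kris lists can be summarized as a predicate. This is a pure-logic sketch of the rule as described in this thread, not part of the driver's API (the method name is invented, and the exact behavior may vary by driver version): a cursor is only used when autocommit is off, the ResultSet is forward-only, and the fetch size is non-zero.]

```java
import java.sql.ResultSet;

public class CursorRule {

    // Illustrative predicate: under the rule described above, the PostgreSQL
    // JDBC driver only streams rows through a server-side cursor when ALL of
    // these hold; otherwise the whole result set is fetched in bulk and an
    // intermediate commit is harmless.
    static boolean willUseCursor(boolean autoCommit, int resultSetType, int fetchSize) {
        return !autoCommit
                && resultSetType == ResultSet.TYPE_FORWARD_ONLY
                && fetchSize > 0;
    }

    public static void main(String[] args) {
        // The original poster's first snippet: autocommit off, default
        // (forward-only) ResultSet, non-zero fetch size -> cursor used,
        // so the intermediate commit invalidates it.
        System.out.println(willUseCursor(false, ResultSet.TYPE_FORWARD_ONLY, 100));

        // With autocommit on, or a scrollable ResultSet type, no cursor
        // is used, which would explain the second snippet working.
        System.out.println(willUseCursor(true, ResultSet.TYPE_FORWARD_ONLY, 100));
        System.out.println(willUseCursor(false, ResultSet.TYPE_SCROLL_INSENSITIVE, 100));
    }
}
```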
On Tuesday 15 June 2004 19:38, you wrote:
> It is unclear what the differences here are. You haven't shown the
> Statement creation and/or execution, so it's possible that you haven't
> used setFetchSize or you've specified a scrollable ResultSet. Maybe auto
> commit is on. Any of these will cause a cursor not to be used and the
> results will be fetched in bulk, so an intermediate commit won't cause any
> problems. Maybe the query doesn't retrieve 100 rows, so it doesn't get to
> the commit part.

And why doesn't this work:

db.setAutoCommit( false );

final Statement select = db.createStatement(
    ResultSet.HOLD_CURSORS_OVER_COMMIT,
    ResultSet.CONCUR_READ_ONLY );

select.setFetchSize( Configuration.FETCH_SIZE );

int i = 0;
for( final ResultSet row = select.executeQuery( sql ); row.next(); i++ )
{
    ...
    insert.execute();

    if( i % 100 == 0 )
        db.commit();
}

BTW ResultSet.TYPE_SCROLL_INSENSITIVE doesn't work either :-(
tmp@nitwit.de wrote:
> And why doesn't this work:
>
> db.setAutoCommit( false );
>
> final Statement select = db.createStatement(
>     ResultSet.HOLD_CURSORS_OVER_COMMIT,
>     ResultSet.CONCUR_READ_ONLY );

You are passing the wrong constants here. The first argument to
createStatement should be a resultset type constant (ResultSet.TYPE_*),
not a cursor holdability constant. You probably meant:

final Statement select = db.createStatement(
    ResultSet.TYPE_FORWARD_ONLY,
    ResultSet.CONCUR_READ_ONLY,
    ResultSet.HOLD_CURSORS_OVER_COMMIT );

The current driver will throw an exception if you try this, as
HOLD_CURSORS_OVER_COMMIT is not supported (actually, it throws in *all*
cases if you use the three-arg createStatement method -- I have that
fixed in my tree).

I note that the current driver returns true from
DatabaseMetaData.supportsResultSetHoldability(ResultSet.HOLD_CURSORS_OVER_COMMIT),
which is wrong.

-O
On Thursday 17 June 2004 15:17, you wrote:
> You probably meant:
>
> final Statement select = db.createStatement(
>     ResultSet.TYPE_FORWARD_ONLY,
>     ResultSet.CONCUR_READ_ONLY,
>     ResultSet.HOLD_CURSORS_OVER_COMMIT );

Oops, of course :-)

> The current driver will throw an exception if you try this, as
> HOLD_CURSORS_OVER_COMMIT is not supported (actually, it throws in *all*

:-(

Is there a schedule for when it will be supported?

But the following also doesn't work:

final Statement select = db.createStatement(
    ResultSet.TYPE_SCROLL_INSENSITIVE,
    ResultSet.CONCUR_READ_ONLY );

So I really wonder how I should solve my problem other than calling
commit() once at the very end...
tmp@nitwit.de wrote:
> :-(
>
> Is there a schedule when it will be supported?

No schedule. Patches are welcome. I suggest you look in the archives
first -- there has been lengthy discussion of the various tradeoffs
involved in SCROLL vs. NO SCROLL and WITH HOLD vs. WITHOUT HOLD.

> But the following also doesn't work:
>
> final Statement select = db.createStatement(
>     ResultSet.TYPE_SCROLL_INSENSITIVE,
>     ResultSet.CONCUR_READ_ONLY );

What behaviour did you expect here? IIRC, the default holdability is
CLOSE_CURSORS_AT_COMMIT.

> So I really wonder how I should solve my problem other than calling
> commit() once at the very end...

Options that spring to mind:

- Commit once at the end, as you suggest. I'd do this unless there's a
  real advantage in breaking it into smaller transactions.

- Use a fetch size of 0, which disables cursors and caches the entire
  resultset in memory. That's not going to work well if you have a
  large resultset.

- Use a separate connection for the insert transactions; as transaction
  state is per-connection, committing an insert transaction won't
  invalidate resultsets obtained from the other connection.

-O
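[Editor's note: the separate-connection option can be sketched as below. This is not runnable standalone -- it needs a live PostgreSQL database and the JDBC driver on the classpath -- and the JDBC URL, table and column names are the ones from the thread; the class name and the inlined fetch size of 100 are illustrative. The compression step from the original code is elided (raw bytes are inserted) to keep the sketch short.]

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class TwoConnectionCopy {
    public static void main(String[] args) throws SQLException {
        final String url = args[0]; // e.g. "jdbc:postgresql://localhost/mydb"
        try (Connection read = DriverManager.getConnection(url);
             Connection write = DriverManager.getConnection(url)) {

            read.setAutoCommit(false);  // required for cursor-based fetch
            write.setAutoCommit(false);

            try (PreparedStatement select =
                         read.prepareStatement("SELECT urlid, html FROM html");
                 PreparedStatement insert =
                         write.prepareStatement(
                             "INSERT INTO legacy.zhtml (id, zhtml) VALUES (?, ?)")) {

                select.setFetchSize(100); // stream rows via a cursor

                int i = 0;
                try (ResultSet row = select.executeQuery()) {
                    while (row.next()) {
                        insert.setString(1, row.getString("urlid"));
                        insert.setBytes(2, row.getString("html").getBytes());
                        insert.execute();

                        // Safe: this commit is on the write connection, so it
                        // cannot close the read connection's cursor.
                        if (++i % 100 == 0) {
                            write.commit();
                        }
                    }
                }
                write.commit();
            }
            read.commit(); // close out the read transaction
        }
    }
}
```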