I am trying to do a full table scan on a large table from Java, using a straightforward "select * from foo". I've run into these problems:
1. By default, the PG JDBC driver attempts to suck the entire result set into RAM, resulting in java.lang.OutOfMemoryError ... this is not cool, in fact I consider it a serious bug (even MySQL gets this right ;-) I am only testing with a 9GB result set, but production needs to scale to 200GB or more, so throwing hardware at it is not feasible.
2. I tried using the official taming method, namely java.sql.Statement.setFetchSize(1000) and this makes it blow up entirely with an error I have no context for, as follows (the number C_10 varies, e.g. C_12 last time) ...
org.postgresql.util.PSQLException: ERROR: portal "C_10" does not exist
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1592)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1327)
    at org.postgresql.core.v3.QueryExecutorImpl.fetch(QueryExecutorImpl.java:1527)
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.next(AbstractJdbc2ResultSet.java:1843)
This is definitely a bug :-)
Is there a known workaround for this ... will updating to a newer version of the driver fix this?
Is there a magic incantation of JDBC calls that will tame it?
Can I cast the objects to PG specific types and access a hidden API to turn off this behaviour?
If the only workaround is to explicitly create a cursor in PG, is there a good example of how to do this from Java?
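For context, the driver's documented way to stream a large result set is cursor mode, which only kicks in when autocommit is off, the statement is forward-only, and a nonzero fetch size is set; with autocommit on, each implicit commit destroys the portal backing the cursor, which matches the "portal does not exist" error above. Here is a minimal sketch of that setup; the table name `foo`, the class name `BigScan`, and the fetch size of 1000 are illustrative assumptions, not anything mandated by the driver:

```java
import java.sql.*;

public class BigScan {
    // 1000 rows per round trip is an assumed tuning value, not a driver requirement.
    static final int FETCH_SIZE = 1000;

    // Streams "select * from foo" instead of buffering it all in RAM.
    // The PG JDBC driver only uses a server-side cursor when ALL of these hold:
    //   - autocommit is off (otherwise the portal dies at each implicit commit),
    //   - the statement is TYPE_FORWARD_ONLY,
    //   - a nonzero fetch size is set.
    static long scan(Connection conn) throws SQLException {
        conn.setAutoCommit(false);            // keep the portal alive across fetches
        long rows = 0;
        try (Statement st = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            st.setFetchSize(FETCH_SIZE);      // rows fetched per network round trip
            try (ResultSet rs = st.executeQuery("select * from foo")) {
                while (rs.next()) {
                    rows++;                   // process each row here
                }
            }
        }
        conn.commit();                        // closes the cursor cleanly
        return rows;
    }

    public static void main(String[] args) throws SQLException {
        if (args.length < 1) {
            System.out.println("usage: BigScan <jdbc-url>");
            return;
        }
        try (Connection conn = DriverManager.getConnection(args[0])) {
            System.out.println("scanned " + scan(conn) + " rows");
        }
    }
}
```

No hidden PG-specific API or cast is needed for this; the plain JDBC calls above are enough, provided all three conditions are met at once.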
From: Dave Crooke
Date:
Subject: Re: [JDBC] SOLVED ... Re: Getting rid of a cursor from JDBC ... Re: HELP: How to tame the 8.3.x JDBC driver with a big query result set