-I'm getting back 4 million rows
-the underlying query is quite complex, joining several tables, and the core table is partitioned across a couple hundred child tables.
Here's the Java code:
conn.setAutoCommit(false);
PreparedStatement pstmt = conn.prepareStatement("select * from foo(?,?,?,?,?,?,?)");
pstmt.setFetchSize(1000);
// then I set the parameters in the prepared statement as usual
pstmt.setInt(1, SOME_NUMBER);
pstmt.setArray(2, conn.createArrayOf("integer", ARRAY_OF_INTEGERS));
//and so on
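For completeness, the execution side is just the standard fetch loop; with autocommit off and a fetch size set, the driver should stream the 4 million rows in batches of 1000 instead of pulling them all into memory at once. A minimal sketch, with the row handling stubbed out:
try (ResultSet rs = pstmt.executeQuery()) {
    while (rs.next()) {
        // process one row; pgjdbc fetches the next batch of 1000 as needed
    }
}
conn.commit();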
What I have found:
-if ARRAY_OF_INTEGERS has a length of 3, the query performs fast
-if ARRAY_OF_INTEGERS has a length of 150, the query takes 1200 seconds
-if ARRAY_OF_INTEGERS has a length of 150, and I don't use a prepared statement, the query takes the expected 30 seconds
So, obviously, when using a prepared statement, Postgres comes up with the wrong execution plan when the parameter list is unexpectedly large. Pretty understandable: the plan is chosen when the statement is prepared, before the actual parameter values are bound, so the planner can't anticipate a 150-element array.
My question: is there a workaround for this? Can I force it not to pick an execution plan until after I bind the variables?
The only reasons I'm using a prepared statement:
-parameterized queries are far easier to work with than building the query via string concatenation
-they minimize the chances of SQL injection
If only there were a way to have parameterized queries without using prepared statements...
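Actually, one candidate I've come across (untested, so treat it as a sketch): the pgjdbc driver's prepareThreshold setting. If I understand it right, setting it to 0 stops the driver from ever switching to a named server-side prepared statement, so the server plans each execution with the actual bound values, while the Java code keeps its ordinary parameterized PreparedStatement. The connection details below are placeholders:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.postgresql.PGStatement;

public class NoServerPrepare {
    public static void main(String[] args) throws Exception {
        // Option 1: disable server-side prepared statements for the whole
        // connection via the URL parameter (host/db/credentials are placeholders)
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb?prepareThreshold=0",
                "user", "secret");

        // Option 2: disable them for this one statement through the pgjdbc
        // extension interface, leaving other statements untouched
        // (the cast works on a plain pgjdbc statement, not one wrapped by a pool)
        PreparedStatement pstmt =
                conn.prepareStatement("select * from foo(?,?,?,?,?,?,?)");
        ((PGStatement) pstmt).setPrepareThreshold(0);

        // ... bind parameters and execute exactly as before ...
    }
}
If anyone can confirm that prepareThreshold=0 really forces a fresh plan per execution here, that would answer my question.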
Thanks!