Barry, Tom, et al.,
Thanks for your help. I really appreciate it.
Okay, I changed the PLpgSQL function to use select for update rather than
locking the table explicitly, and now I'm getting different errors depending
on the mode. In auto-commit and read-committed modes, I see the same failure
as before: thread A updates the (supposedly locked) row between thread B's
select and its subsequent update, so thread B's update affects 0 rows, which
is exactly what I'm trying to avoid.
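For reference, here's roughly the pattern I expect select for update to give
me, sketched in plain JDBC (the id_blocks table and its columns are just
placeholders for whatever the function really touches):

conn.setAutoCommit(false);

// lock the counter row; no other transaction should be able to
// select ... for update or update it until we commit
PreparedStatement sel = conn.prepareStatement(
    "select next_id from id_blocks where name = ? for update");
sel.setString(1, name);
ResultSet rs = sel.executeQuery();
rs.next();
int nextId = rs.getInt(1);

// bump the counter while still holding the row lock, so this
// should always affect exactly one row
PreparedStatement upd = conn.prepareStatement(
    "update id_blocks set next_id = ? where name = ?");
upd.setInt(1, nextId + count);
upd.setString(2, name);
upd.executeUpdate();

conn.commit();   // releases the row lock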
Running in serializable mode, I'm getting a Postgres exception:
ERROR: Can't serialize access due to concurrent update
It seems to me that the table locks grabbed in the PLpgSQL function aren't
actually being held. They verify that the caller can *get* the lock, but
don't keep it. The select for update behaves the same way: it confirms the
lock can be acquired, yet still lets other sessions take the same lock.
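For the auto-commit case specifically, my understanding of JDBC is that each
statement runs in its own transaction, so a lock taken by one statement is
already gone by the time the next one runs. I don't think that's what happens
inside the function itself, but as a sketch of what I mean (id_blocks is
again a stand-in):

Statement stmt = conn.createStatement();
conn.setAutoCommit(true);

// statement 1 is its own transaction: the row lock is taken and then
// released at the implicit commit when the statement finishes
stmt.executeQuery(
    "select next_id from id_blocks where name = 'foo' for update");

// statement 2 is a brand-new transaction with no lock held, so another
// thread can update the row in between the two statements
stmt.executeUpdate(
    "update id_blocks set next_id = next_id + 10 where name = 'foo'");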
Anyway, here's how I'm setting the transaction level in Java. IDFactorySQL
gets a name key (String) and a Connection object in its constructor and
passes them to an internal init() method, which sets up the transaction
handling:
protected void init ( Connection conn , String name )
{
    this.conn = conn;
    this.name = name;
    try
    {
        // conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        conn.setAutoCommit(false);
    }
    catch ( SQLException e )
    {
        invalidate();
    }
}
I've tried both isolation levels separately, as well as not setting one at
all [but still calling setAutoCommit(false)], which I understand should
leave me at the read-committed level. Then, before calling the PLpgSQL
function next_id_block(), I've tried both setting auto-commit to false again
and leaving it alone:
stmt = conn.prepareStatement("select next_id_block(?, ?)");
stmt.setString(1, name);
stmt.setInt(2, count);
conn.setAutoCommit(false);
result = stmt.executeQuery();
...
conn.commit();
I roll back on any SQLException, but at that point the test stops, since
it's already broken. Any other ideas?
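One thing I haven't tried yet is treating the serialization error as
retryable, since from what I've read it's expected under concurrent updates
in serializable mode. Something like this, maybe (a rough sketch: the
RETRY_LIMIT bound is made up, and ideally I'd check that the exception
really is a serialization failure before retrying):

int nextIdWithRetry ( Connection conn , String name , int count )
    throws SQLException
{
    final int RETRY_LIMIT = 5;   // placeholder bound
    for ( int attempt = 0 ; attempt < RETRY_LIMIT ; attempt++ )
    {
        try
        {
            PreparedStatement stmt =
                conn.prepareStatement("select next_id_block(?, ?)");
            stmt.setString(1, name);
            stmt.setInt(2, count);
            ResultSet result = stmt.executeQuery();
            result.next();
            int id = result.getInt(1);
            conn.commit();
            return id;
        }
        catch ( SQLException e )
        {
            // assume a serialization failure: roll back and try again
            conn.rollback();
        }
    }
    throw new SQLException("next_id_block() still failing after "
        + RETRY_LIMIT + " attempts");
}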
Peace,
Dave