Re: [GENERAL] idle in transaction, why - Mailing list pgsql-general
From: Rob Sargent
Subject: Re: [GENERAL] idle in transaction, why
Date:
Msg-id: c585156c-f0b6-64e2-5fcd-27d73664d5b5@gmail.com
In response to: Re: [GENERAL] idle in transaction, why (Merlin Moncure <mmoncure@gmail.com>)
List: pgsql-general
On 11/06/2017 02:38 PM, Merlin Moncure wrote:
> On Mon, Nov 6, 2017 at 2:50 PM, Rob Sargent <robjsargent@gmail.com> wrote:
>> On 11/06/2017 01:41 PM, Tom Lane wrote:
>>> Rob Sargent <robjsargent@gmail.com> writes:
>>>> idle_in_transaction_session_timeout | 0 | default | | | A value of 0 turns off the timeout. | user
>>>
>>> Meh. I think we're barking up the wrong tree anyway: so far as I can find, there is no error message reading 'idle transaction timeout' in the existing PG sources (and I sure hope no committer would have thought that such an ambiguous message text was satisfactory). So I think your error is coming from client-side or third-party code. What other moving parts have you got in there?
>>>
>>> regards, tom lane
>>
>> The most likely culprit is JOOQ, which I chose as a learning experience (normally I use ORM tools). But that said, I just ran the same data into my test env (postgres 10.0 (real) on centos 6.9, ubuntu client) and all went swimmingly. It's a sizable payload (several batches of over 100K items, deserialized from json) and takes 5 minutes to save. I was hoping to blame the virt or the beta. Not a good time to start doubting JOOQ.
>
> I can't speak to JOOQ (who are fantastic postgres supporters BTW), but in the java world this typically comes from one of two things:
>
> 1) you have a long-running in-transaction process that has very heavy computation between sql statements. this is a rare case
> --or--
> 2) you are using connection pooling and the app sent a connection back into the pool without having committed the transaction.
>
> "2" is a common and dangerous bug. It can happen due to a bug in application code (most likely), the jdbc wrapping library code (less likely), or the connection pooler itself if you're using one. A typical cause of application-side problems is manual transaction management with some uncaught exception paths where errors (say, a duplicate key error) leave the transaction open. So investigate causes like that first (database errors in the database log might be a helpful clue) and go from there. If the problem is within JOOQ, you ought to take it up with them, which I encourage you to do, since I consider JOOQ to be a wonderful treatment of SQL integration from the java perspective.
>
> merlin
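To spell out Merlin's case 2: with a pooled DataSource and manual transaction management, the dangerous pattern looks roughly like the sketch below. This is only a hypothetical illustration (the class, table and method names are invented, and whether a given pool rolls back an open transaction on return is pool-specific), not code from my application:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    class PooledSaveSketch {

        // Buggy: if executeUpdate() throws (say, a duplicate key error), the
        // try-with-resources still hands the connection back to the pool, but
        // the transaction opened by setAutoCommit(false) was never committed
        // or rolled back -- unless the pool cleans it up, the backend session
        // is left "idle in transaction".
        static void saveBuggy(DataSource pool, String name) throws SQLException {
            try (Connection con = pool.getConnection()) {
                con.setAutoCommit(false);
                try (PreparedStatement ps =
                         con.prepareStatement("insert into item(name) values (?)")) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                }
                con.commit();   // skipped on the exception path
            }
        }

        // Safer: commit on success, roll back on any failure, so a connection
        // never goes back to the pool with a transaction still open.
        static void saveSafe(DataSource pool, String name) throws SQLException {
            try (Connection con = pool.getConnection()) {
                con.setAutoCommit(false);
                try (PreparedStatement ps =
                         con.prepareStatement("insert into item(name) values (?)")) {
                    ps.setString(1, name);
                    ps.executeUpdate();
                    con.commit();
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            }
        }
    }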
"2" definitely fits this bill. The difference between test and prod is pgboucer which I've forgotten to mention at all in this thread. I do start a tx in my code a la:
    public void writedb(DSLContext ctx) {
        logger.error("{}: start transaction at {}", getRunTag(), System.currentTimeMillis());
        ctx.transaction(ltx -> {
            startProcess(ctx);
            writeSegments(ctx);
            finishProcess(ctx);
        });
        logger.error("{}: end transaction at {}", getRunTag(), System.currentTimeMillis());
    }

But I don't think this is out of the ordinary. However, writing lists of up to 1,175,151 records might be out of the ordinary (2 this size, to at 131K). I'll take this up with JOOQ and pgbouncer. (fasterxml is having trouble with this size too. Not sure how I can break this up if that needs to be done; they're all generated from the same analysis run.)
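For reference, jOOQ's lambda transaction API passes a transactional Configuration into the lambda (the ltx above); a variant that derives its DSLContext from that Configuration instead of reusing the outer ctx would look roughly like this. This is just a sketch of that pattern as I understand the API, not a claim that it explains the pgbouncer symptom; startProcess/writeSegments/finishProcess are the same methods as above:

    import org.jooq.Configuration;
    import org.jooq.DSLContext;
    import org.jooq.impl.DSL;

    public void writedb(DSLContext ctx) {
        logger.error("{}: start transaction at {}", getRunTag(), System.currentTimeMillis());
        ctx.transaction((Configuration txCfg) -> {
            // bind a DSLContext to the transaction's Configuration so the
            // statements run on the transaction's connection
            DSLContext tx = DSL.using(txCfg);
            startProcess(tx);
            writeSegments(tx);
            finishProcess(tx);
        });
        logger.error("{}: end transaction at {}", getRunTag(), System.currentTimeMillis());
    }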
Thanks to all.