Thread: Why is an error not thrown when the time exceeds the lock timeout for an ExclusiveLock on a transaction ?
Hi Everyone,
I have an SQL statement that updates a table in our database. The SQL in question is:
UPDATE public.sum_partn_alloc AS T1
SET effective_date = to_char(CURRENT_TIMESTAMP + interval '6 days', 'YYYYMMDDHH24MI')
FROM public.sum_main AS T2
WHERE T1.status != 8
AND (T1.effective_date = '0' OR CURRENT_TIMESTAMP + interval '3 days' > to_timestamp(T1.effective_date, 'YYYYMMDDHH24MI'))
AND T1.ds_index IN (1741052236)
AND T1.ds_index = T2.ds_index
AND T2.online_status = 'Y'
where the value in the IN list can be just a single number or a long list of numbers (20-50 or so).
Normally, this SQL is executed without problem thousands of times a day.
Then, at certain times, we see the SQL exceed the timeout for the ShareLock on the transaction. No error is thrown from the DB, and processing in the driving Python script stops.
I've seen this in the case of a single number in the list, or a long list of 20-50 numbers or so.
I see in the log at that point:
process 683860 still waiting for ShareLock on transaction 492777941 after 1000.140 ms
while updating tuple (1282991,25) in relation "sum_partn_alloc"
The Postgres server (12.22), running on RHEL 8.10, is configured with a default lock timeout of 1 second.
Why is Postgres not throwing an error when the ShareLock wait has exceeded 1 second?
The Python script driving this SQL never gets an error from Postgres in its exception block, and hence the script simply stops at that point and processing halts.
Are there other parameters one can set in the configuration that would result in the Python script getting an error from Postgres, so that this case can be handled and processing can continue?
Any ideas or tips would be greatly appreciated.
Thanks,
--Ed
Ed Mansky
Software Engineer SDAC / VSO
NASA Goddard Space Flight Center
ADNET SYSTEMS, Inc.
8800 Greenbelt Rd, Greenbelt MD 20771
Re: Why is an error not thrown when the time exceeds the lock timeout for an ExclusiveLock on a transaction ?
"Mansky, Edmund J. (GSFC-671.0)[ADNET Affiliate]" <edmund.j.mansky@nasa.gov> writes: > I see in the log at that point : process 683860 still waiting for ShareLock on transaction 492777941 after 1000.140 ms > while updating tuple (1282991,25) in relation "sum_partn_alloc" > The Postgres server (12.22), running on RHEL 8.10 is configured with a default lock timeout of 1 sec. > Why is Postgres not throwing an error when the ShareLock time has exceeded 1 sec. ? I think you have misread the description of deadlock_timeout: it is the lock wait time after which we check to see if there's a deadlock. If there's not, we just log the above message (if configured to do so) and keep waiting. If you want to fail after X amount of time, lock_timeout or perhaps statement_timeout is what to set for that. regards, tom lane
Re: Why is an error not thrown when the time exceeds the lock timeout for an ExclusiveLock on a transaction ?
Two things to look at:

1. Server-side timeout configuration: lock_timeout (and/or statement_timeout), so the server cancels the stuck statement instead of waiting.

2. Client-side timeout configuration plus a try...catch block around each query. Something like this should work through JavaScript/Node.js (note: node-postgres exposes its timeouts as statement_timeout / query_timeout settings in the client or pool configuration, not as a { timeout: <milliseconds> } argument to client.query):
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: 'your_connection_string',
  statement_timeout: 5000, // server cancels any statement after 5 seconds
  query_timeout: 5000      // client-side guard on each query (milliseconds)
});

async function executeQuery(sql) {
  const client = await pool.connect();
  try {
    const result = await client.query(sql);
    return result;
  } catch (err) {
    console.error('Error executing query:', err);
    throw err; // Re-throw to propagate the error
  } finally {
    client.release();
  }
}
If none of the server timeout parameters or client-side timeout settings help you identify the problem, you may need to install the pg_timeout extension in Postgres.
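For reference, a sketch of how the stock parameters could be applied per role, so the error reaches the driving script without editing postgresql.conf (the role name below is a placeholder, not something from this thread):

-- Hypothetical role; substitute the account the script connects as.
ALTER ROLE sums_updater SET lock_timeout = '1s';
-- Optionally also cap total statement runtime:
ALTER ROLE sums_updater SET statement_timeout = '30s';

New sessions for that role pick up both settings automatically.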