19.12. Lock Management
deadlock_timeout (integer)

This is the amount of time to wait on a lock before checking to see if there is a deadlock condition. The check for deadlock is relatively expensive, so the server doesn't run it every time it waits for a lock. We optimistically assume that deadlocks are not common in production applications and just wait on the lock for a while before checking for a deadlock. Increasing this value reduces the amount of time wasted in needless deadlock checks, but slows down reporting of real deadlock errors. If this value is specified without units, it is taken as milliseconds. The default is one second (1s), which is probably about the smallest value you would want in practice. On a heavily loaded server you might want to raise it. Ideally the setting should exceed your typical transaction time, so as to improve the odds that a lock will be released before the waiter decides to check for deadlock. Only superusers can change this setting.

When log_lock_waits is set, this parameter also determines the amount of time to wait before a log message is issued about the lock wait. If you are trying to investigate locking delays you might want to set a shorter than normal deadlock_timeout.

max_locks_per_transaction (integer)

The shared lock table tracks locks on max_locks_per_transaction * (max_connections + max_prepared_transactions) objects (e.g., tables); hence, no more than this many distinct objects can be locked at any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions fit in the lock table. This is not the number of rows that can be locked; that value is unlimited. The default, 64, has historically proven sufficient, but you might need to raise this value if you have queries that touch many different tables in a single transaction, e.g., query of a parent table with many children. This parameter can only be set at server start.

When running a standby server, you must set this parameter to the same or higher value than on the primary server. Otherwise, queries will not be allowed in the standby server.
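As an illustration of the sizing formula above, the following sketch computes the lock table capacity; the max_connections and max_prepared_transactions values used in the example are illustrative, not recommendations:

```python
# Sketch: capacity of the shared lock table, per the documented formula
# max_locks_per_transaction * (max_connections + max_prepared_transactions).
# The specific settings used below are made-up example values.

def lock_table_capacity(max_locks_per_transaction: int,
                        max_connections: int,
                        max_prepared_transactions: int) -> int:
    """Maximum number of distinct objects that can be locked at one time."""
    return max_locks_per_transaction * (max_connections + max_prepared_transactions)

# With the default max_locks_per_transaction of 64, 100 connections,
# and no prepared transactions:
print(lock_table_capacity(64, 100, 0))  # -> 6400
```

Note that because the limit is on the total table size, a single transaction may consume far more than 64 lock slots as long as the aggregate fits.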
max_pred_locks_per_transaction (integer)

The shared predicate lock table tracks locks on max_pred_locks_per_transaction * (max_connections + max_prepared_transactions) objects (e.g., tables); hence, no more than this many distinct objects can be locked at any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions fit in the lock table. This is not the number of rows that can be locked; that value is unlimited. The default, 64, has generally been sufficient in testing, but you might need to raise this value if you have clients that touch many different tables in a single serializable transaction. This parameter can only be set at server start.

max_pred_locks_per_relation (integer)

This controls how many pages or tuples of a single relation can be predicate-locked before the lock is promoted to covering the whole relation. Values greater than or equal to zero mean an absolute limit, while negative values mean max_pred_locks_per_transaction divided by the absolute value of this setting. The default is -2, which keeps the behavior from previous versions of Postgres Pro. This parameter can only be set in the postgresql.conf file or on the server command line.

max_pred_locks_per_page (integer)

This controls how many rows on a single page can be predicate-locked before the lock is promoted to covering the whole page. The default is 2. This parameter can only be set in the postgresql.conf file or on the server command line.

lwlock_shared_limit (integer)

Specifies the number of sequential shared LWLocks held before switching to fair mode. In fair mode, would-be shared lockers are added to the wait queue instead of acquiring the lock immediately, which makes it possible to acquire an exclusive LWLock within a reasonable time and thus avoids lock starvation. Setting this option to a small value lets exclusive locks be acquired much faster, but can cause significant overhead. The larger the value, the lower both the overhead and the speedup. As a rule of thumb, setting this option to 16 provides a good speedup with minimal overhead. The default is 0 (the option is disabled). This parameter can only be set in the postgresql.conf file or on the server command line.

log2_num_lock_partitions (integer)

This controls how many partitions the shared lock tables are divided into. The number of partitions is calculated by raising 2 to the power of this parameter. The default value is 4, which corresponds to 16 partitions, and the maximum is 8. This parameter can only be set in the postgresql.conf file or on the server command line.
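The relationship between log2_num_lock_partitions and the resulting partition count is a simple power of two; the following sketch spells out the documented arithmetic:

```python
# The number of partitions of the shared lock tables is 2 raised to
# the power of log2_num_lock_partitions (illustrative arithmetic only,
# not server code).

def num_lock_partitions(log2_num_lock_partitions: int) -> int:
    return 2 ** log2_num_lock_partitions

print(num_lock_partitions(4))  # default setting -> 16 partitions
print(num_lock_partitions(8))  # documented maximum -> 256 partitions
```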
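The promotion rule for max_pred_locks_per_relation described earlier (non-negative values are an absolute limit; negative values are a fraction of max_pred_locks_per_transaction) can be sketched as follows. This is a model of the documented arithmetic only, and the use of integer division for the negative case is an assumption, not the server's actual implementation:

```python
# Sketch of the max_pred_locks_per_relation promotion threshold.
# Models the documented arithmetic; integer division is an assumption.

def per_relation_threshold(max_pred_locks_per_relation: int,
                           max_pred_locks_per_transaction: int) -> int:
    """Pages/tuples of one relation that can be predicate-locked before
    the lock is promoted to cover the whole relation."""
    if max_pred_locks_per_relation >= 0:
        return max_pred_locks_per_relation  # absolute limit
    # Negative values mean max_pred_locks_per_transaction divided by
    # the absolute value of this setting.
    return max_pred_locks_per_transaction // abs(max_pred_locks_per_relation)

# Defaults: max_pred_locks_per_relation = -2 with
# max_pred_locks_per_transaction = 64:
print(per_relation_threshold(-2, 64))  # -> 32
```

With both parameters at their defaults, roughly 32 pages or tuples of a single relation can be predicate-locked before promotion; raising max_pred_locks_per_transaction therefore also raises this threshold.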