[HACKERS] Moving relation extension locks out of heavyweight lock manager - Mailing list pgsql-hackers

From Masahiko Sawada
Subject [HACKERS] Moving relation extension locks out of heavyweight lock manager
Msg-id CAD21AoCmT3cFQUN4aVvzy5chw7DuzXrJCbrjTU05B+Ss=Gn1LA@mail.gmail.com
Responses Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager  (Robert Haas <robertmhaas@gmail.com>)
Re: [HACKERS] Moving relation extension locks out of heavyweight lock manager  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
Hi all,

Currently, the relation extension lock is implemented using the
heavyweight lock manager, and almost all functions (except for
brin_page_cleanup) that call LockRelationForExtension use it with
ExclusiveLock mode. But it doesn't actually need multiple lock modes,
deadlock detection, or any of the other functionality that the
heavyweight lock manager provides. I think something like LWLock is
enough. So I'd like to propose changing relation extension lock
management so that it works using LWLock instead.
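
For reference, the current call pattern looks roughly like the
following (a simplified sketch based on RelationGetBufferForTuple in
src/backend/access/heap/hio.c; error handling and the non-locking
fast paths are omitted):

    /*
     * Simplified sketch of the existing heavyweight-lock usage when
     * extending a heap relation.
     */
    if (needLock)
        LockRelationForExtension(relation, ExclusiveLock);

    /* Extend the relation by one block and pin the new buffer. */
    buffer = ReadBufferBI(relation, P_NEW, bistate);

    /* ... initialize and fill the new page ... */

    if (needLock)
        UnlockRelationForExtension(relation, ExclusiveLock);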

The attached draft patch makes relation extension locks use LWLocks
rather than the heavyweight lock manager, with a shared hash table
storing the relation extension lock information. The basic idea is
that we add a hash table in shared memory for relation extension
locks, and each hash entry contains an LWLock struct. Whenever a
process wants to acquire a relation extension lock, it looks up the
appropriate LWLock entry in the hash table and acquires it. On
unlock, the process can remove the hash entry if nobody else is
holding or waiting for it.
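
To make the idea concrete, here is a minimal sketch of such a shared
hash table of LWLocks. All names (ExtLockTag, ExtLockEntry,
ExtLockHash, and so on), the table sizes, and the tranche id are
illustrative placeholders, not taken from the attached patch:

    #include "postgres.h"

    #include "storage/lwlock.h"
    #include "storage/shmem.h"
    #include "utils/hsearch.h"

    /* Hash key: one extension lock per relation. */
    typedef struct ExtLockTag
    {
        Oid         dbid;           /* database OID */
        Oid         relid;          /* relation OID */
    } ExtLockTag;

    /* Hash entry: the key must be the first field for dynahash. */
    typedef struct ExtLockEntry
    {
        ExtLockTag  tag;            /* hash key */
        LWLock      lock;           /* the relation extension lock itself */
        int         refcount;       /* holders + waiters; remove entry at 0 */
    } ExtLockEntry;

    static HTAB *ExtLockHash;       /* created at shared-memory init time */
    static LWLock *ExtLockHashLock; /* assumed allocated elsewhere; protects
                                     * entry insert/remove and refcount */

    /* Create the hash table during shared-memory initialization. */
    static void
    ExtLockShmemInit(void)
    {
        HASHCTL     info;

        MemSet(&info, 0, sizeof(info));
        info.keysize = sizeof(ExtLockTag);
        info.entrysize = sizeof(ExtLockEntry);
        ExtLockHash = ShmemInitHash("Relation Extension Lock Hash",
                                    64, 1024,   /* sizes are illustrative */
                                    &info,
                                    HASH_ELEM | HASH_BLOBS);
    }

    /* Acquire the extension lock for the given relation. */
    static void
    ExtLockAcquire(Oid dbid, Oid relid)
    {
        ExtLockTag  tag = {dbid, relid};
        ExtLockEntry *entry;
        bool        found;

        LWLockAcquire(ExtLockHashLock, LW_EXCLUSIVE);
        entry = (ExtLockEntry *) hash_search(ExtLockHash, &tag,
                                             HASH_ENTER, &found);
        if (!found)
        {
            /* First locker of this relation: set up the LWLock. */
            LWLockInitialize(&entry->lock, LWTRANCHE_FIRST_USER_DEFINED);
            entry->refcount = 0;
        }
        entry->refcount++;          /* keeps the entry alive while we wait */
        LWLockRelease(ExtLockHashLock);

        LWLockAcquire(&entry->lock, LW_EXCLUSIVE);
    }

    /* Release the lock, removing the entry if nobody else needs it. */
    static void
    ExtLockRelease(Oid dbid, Oid relid)
    {
        ExtLockTag  tag = {dbid, relid};
        ExtLockEntry *entry;
        bool        found;

        LWLockAcquire(ExtLockHashLock, LW_EXCLUSIVE);
        entry = (ExtLockEntry *) hash_search(ExtLockHash, &tag,
                                             HASH_FIND, &found);
        Assert(found);
        LWLockRelease(&entry->lock);
        if (--entry->refcount == 0)
            hash_search(ExtLockHash, &tag, HASH_REMOVE, NULL);
        LWLockRelease(ExtLockHashLock);
    }

A single lock protecting the whole table would itself become a
contention point, so partitioning it (as the heavyweight lock manager
does with its lock table) would likely be needed; the sketch keeps one
lock only for brevity.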

This work would be helpful not only for existing workloads but also
for future work such as the parallel utility commands being discussed
in other threads[1]. At least for parallel vacuum, this feature helps
solve an issue in its implementation.

I ran pgbench three times, 10 minutes per run, at scale factor 5000;
here are the performance measurement results.

clients    TPS (HEAD)    TPS (patched)
      4      2092.612         2031.277
      8      3153.732         3046.789
     16      4562.072         4625.419
     32      6439.391         6479.526
     64      7767.364         7779.636
    100      7917.173         7906.567

* 16 core Xeon E5620 2.4GHz
* 32 GB RAM
* ioDrive

With the current implementation, there seems to be no performance degradation so far.
Please give me feedback.

[1]
* Block level parallel vacuum WIP
   <https://www.postgresql.org/message-id/CAD21AoD1xAqp4zK-Vi1cuY3feq2oO8HcpJiz32UDUfe0BE31Xw%40mail.gmail.com>
* CREATE TABLE with parallel workers, 10.0?
  <https://www.postgresql.org/message-id/CAFBoRzeoDdjbPV4riCE%2B2ApV%2BY8nV4HDepYUGftm5SuKWna3rQ%40mail.gmail.com>
* utility commands benefiting from parallel plan
  <https://www.postgresql.org/message-id/CAJrrPGcY3SZa40vU%2BR8d8dunXp9JRcFyjmPn2RF9_4cxjHd7uA%40mail.gmail.com>

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
