Re: dynamic shared memory and locks - Mailing list pgsql-hackers

From Robert Haas
Subject Re: dynamic shared memory and locks
Msg-id CA+TgmoaTXCE93BGn3+BVJVmeu98QibMK-ZQk85WoR2ty-Wab_A@mail.gmail.com
In response to Re: dynamic shared memory and locks  (KaiGai Kohei <kaigai@ak.jp.nec.com>)
List pgsql-hackers
On Mon, Jan 20, 2014 at 11:23 PM, KaiGai Kohei <kaigai@ak.jp.nec.com> wrote:
> I briefly checked the patch. Most of the lines are a mechanical
> replacement of LWLockId with LWLock *, and the compiler didn't
> complain about anything with -Wall -Werror.
>
> My concern is about the LWLockTranche mechanism. Isn't it too
> complicated for the purpose?
> For example, it may make sense to add "char lockname[NAMEDATALEN];" at
> the tail of the LWLock structure when LOCK_DEBUG is enabled, and to
> add an argument to LWLockAssign() that gives the human-readable name.
> Is the additional 64 bytes (= NAMEDATALEN) per lock too large for
> recent hardware?

Well, we'd need the name when LOCK_DEBUG is defined, when LWLOCK_STATS
is defined, or when --enable-dtrace is used.  The first two are
probably used rarely enough that the overhead would be OK, but the
third case is probably fairly common, and I don't think we want such a
potentially performance-critical difference between builds with and
without dtrace.
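
For concreteness, my reading of that suggestion is roughly the
following (a simplified sketch, not the actual field layout):

/* simplified sketch only; the real LWLock has additional fields */
typedef struct LWLock
{
    slock_t     mutex;          /* protects the fields below */
    char        exclusive;      /* # of exclusive holders (0 or 1) */
    int         shared;         /* # of shared holders */
#ifdef LOCK_DEBUG
    char        lockname[NAMEDATALEN];  /* proposed human-readable name */
#endif
} LWLock;

/* ...and LWLockAssign() would grow a name argument, something like: */
extern LWLock *LWLockAssign(const char *lockname);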

Also... yeah, it's a lot of memory.  If we add an additional 64 bytes
to the structure, then we're looking at 96 bytes per lwlock instead of
32, after padding out to a 32-byte boundary to avoid crossing cache
lines.  We need 2 lwlocks per buffer, so that's an additional 128
bytes per 8kB buffer.  For shared_buffers = 8GB, that's an extra 128MB
for storing lwlocks.  I'm not willing to assume nobody cares about
that.  And while I agree that this is a bit complex, I don't think
it's really as bad as all that.  We've gotten by for a long time
without people being able to put lwlocks in other parts of memory, and
most extension authors have gotten by with LWLockAssign() just fine
and can continue doing things that way; only users with particularly
sophisticated needs should bother to worry about the tranche stuff.
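To spell that arithmetic out:

    32 + 64 = 96 bytes per lwlock, i.e. 64 extra bytes per lwlock
    64 extra bytes * 2 lwlocks per buffer = 128 extra bytes per 8kB buffer
    8GB / 8kB = 1,048,576 buffers; 1,048,576 * 128 bytes = 128MB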

One idea I just had is to improve the dsm_toc module so that it can
optionally set up a tranche of lwlocks for you, and provide some
analogues of RequestAddinLWLocks and LWLockAssign for that case.  That
would probably make this quite a bit simpler to use, at least for
people using it with dynamic shared memory.  But I think that's a
separate patch.
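
Roughly, I'm imagining something with the shape below; the names here
are invented purely for illustration, not an actual API:

/* hypothetical names, just to show the shape of the interface */
void    dsm_toc_request_lwlocks(dsm_toc_estimator *e, int nlocks); /* reserve space */
LWLock *dsm_toc_assign_lwlock(dsm_toc *toc);  /* hand out one lock from the tranche */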

> Below are some minor comments.
>
> It seems to me this newer form increases the number of direct
> references to MainLWLockArray. Is that really graceful?
> My recommendation is to define a macro that wraps the reference to
> MainLWLockArray.
>
> For example:
>   #define PartitionLock(i) \
>         (&MainLWLockArray[LOCK_MANAGER_LWLOCK_OFFSET + (i)].lock)

Yeah, that's probably a good idea.
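
With a macro like that, a call site would go from something like:

    LWLockAcquire(&MainLWLockArray[LOCK_MANAGER_LWLOCK_OFFSET + partition].lock,
                  LW_EXCLUSIVE);

to:

    LWLockAcquire(PartitionLock(partition), LW_EXCLUSIVE);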

> This chunk is enclosed by #ifdef EXEC_BACKEND, but the legacy
> LWLockArray was not removed.

Oops, will fix.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


