Re: Delay locking partitions during query execution - Mailing list pgsql-hackers

From Amit Langote
Subject Re: Delay locking partitions during query execution
Date
Msg-id fe648f94-9cff-4282-dd12-a1a5bf64cd3b@lab.ntt.co.jp
In response to Re: Delay locking partitions during query execution  (David Rowley <david.rowley@2ndquadrant.com>)
Responses Re: Delay locking partitions during query execution
List pgsql-hackers
On 2019/01/28 20:27, David Rowley wrote:
> On Mon, 28 Jan 2019 at 20:45, Amit Langote
> <Langote_Amit_f8@lab.ntt.co.jp> wrote:
>> It seems to me that plancache.c doesn't really need to perform
>> AcquireExecutorLocks()/LockRelationOid() to learn that a partition's
>> reloptions property has changed to discard a generic plan and build a new
>> one.  AFAICT, PlanCacheRelCallback() takes care of resetting a cached plan
by observing that an invalidation message that it received either from
>> the same session or from another session belongs to one of the relations
>> in PlannedStmt.relationOids.  That list must already contain all
>> partitions' OIDs.
> 
> Really? So when you tried my case you properly got a plan with a
> non-parallel Seq Scan on listp1?
> 
> I imagine you didn't with yours since we check for relcache
> invalidations at the start of a transaction.  I performed both my
> EXECUTEs in the same transaction.

Yeah, I performed each EXECUTE in a new transaction, so not the same case
as yours.  Sorry about the noise.

However, when I tried the example as you described it, the plan *doesn't*
change after the concurrent update of reloptions even on master (without
the patch).
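For reference, a minimal setup consistent with the partition names and row
counts in the plans below might look like this (the partition bounds and
data distribution are my assumptions, since the original setup isn't quoted
here):

```sql
-- Hypothetical setup; bounds and row counts chosen to match the
-- actual rows reported in the EXPLAIN output below.
create table listp (a int) partition by list (a);
create table listp1 partition of listp for values in (1);
create table listp2 partition of listp for values in (2);
insert into listp select 1 from generate_series(1, 100000);
insert into listp select 2 from generate_series(1, 100000);
analyze listp;
```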

session 1:
begin;
prepare q1 as select count(*) from listp;
explain (costs off, analyze, timing off, summary off) execute q1;
                                   QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────
 Finalize Aggregate (actual rows=1 loops=1)
   ->  Gather (actual rows=3 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate (actual rows=1 loops=3)
               ->  Parallel Append (actual rows=66667 loops=3)
                     ->  Parallel Seq Scan on listp1 (actual rows=33333 loops=3)
                     ->  Parallel Seq Scan on listp2 (actual rows=50000 loops=2)
(8 rows)

session 2:
alter table listp1 set (parallel_workers=0);

session 1:
explain (costs off, analyze, timing off, summary off) execute q1;
                                   QUERY PLAN
─────────────────────────────────────────────────────────────────────────────────
 Finalize Aggregate (actual rows=1 loops=1)
   ->  Gather (actual rows=3 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate (actual rows=1 loops=3)
               ->  Parallel Append (actual rows=66667 loops=3)
                     ->  Parallel Seq Scan on listp1 (actual rows=33333 loops=3)
                     ->  Parallel Seq Scan on listp2 (actual rows=50000 loops=2)
(8 rows)

I then built master with -DCATCACHE_FORCE_RELEASE and the plan *does*
change, but only because syscache misses here and there result in opening
the underlying system catalog, which in turn calls
AcceptInvalidationMessages().

In a normal build, invalidation messages don't seem to be processed even
when AcquireExecutorLocks() is called, which is perhaps not normal.  It
seems that the following condition in LockRelationOid is never true when
called from AcquireExecutorLocks:

    if (res != LOCKACQUIRE_ALREADY_CLEAR)
    {
        AcceptInvalidationMessages();
        MarkLockClear(locallock);
    }

Bug, maybe?

Thanks,
Amit


