Thread: [HACKERS] [POC] Faster processing at Gather node

[HACKERS] [POC] Faster processing at Gather node

From
Rafia Sabih
Date:
Hello everybody,

While analysing the performance of TPC-H queries for the newly developed parallel operators, viz. parallel index scan, parallel bitmap heap scan, etc., we noticed that the time taken by the Gather node is significant. On investigation we found that, with the current method, each tuple is copied to the shared queue and the receiver is notified for it. Since this copying is done in the shared queue, it incurs a lot of locking and latching overhead.

So, in this POC patch I tried to copy all the tuples into a local queue first, thus avoiding all the locks and latches. Once the local queue is filled to its capacity, the tuples are transferred to the shared queue. Once all the tuples are transferred, the receiver is notified about it.
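In rough pseudocode-like C, the idea is something like the following (an illustrative sketch only, not the actual patch; LocalQueue and the helper names are made up, and the real code deals in shm_mq messages and MinimalTuples):

/* Illustrative sketch only -- the names below are hypothetical. */
typedef struct LocalQueue
{
    char       *buffer;         /* backend-local memory: no locks, no latches */
    Size        capacity;
    Size        used;
} LocalQueue;

static void
worker_send_tuple(LocalQueue *lq, shm_mq_handle *mqh,
                  const void *tuple, Size len)
{
    /* Flush once the local queue is filled to its capacity. */
    if (lq->used + len > lq->capacity)
    {
        /*
         * One locked transfer into the shared queue, and one notification
         * to the receiver, per batch of tuples rather than per tuple.
         */
        flush_to_shared_queue(mqh, lq->buffer, lq->used);   /* hypothetical */
        lq->used = 0;
    }
    memcpy(lq->buffer + lq->used, tuple, len);
    lq->used += len;
}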

With this patch I could see a significant improvement in performance for simple queries:

head:
explain  analyse select * from t where i < 30000000;
                                                         QUERY PLAN                                                          
-----------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..83225.55 rows=29676454 width=19) (actual time=1.379..35871.235 rows=29999999 loops=1)
   Workers Planned: 64
   Workers Launched: 64
   ->  Parallel Seq Scan on t  (cost=0.00..83225.55 rows=463695 width=19) (actual time=0.125..1415.521 rows=461538 loops=65)
         Filter: (i < 30000000)
         Rows Removed by Filter: 1076923
 Planning time: 0.180 ms
 Execution time: 38503.478 ms
(8 rows)

patch:
 explain  analyse select * from t where i < 30000000;
                                                         QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..83225.55 rows=29676454 width=19) (actual time=0.980..24499.427 rows=29999999 loops=1)
   Workers Planned: 64
   Workers Launched: 64
   ->  Parallel Seq Scan on t  (cost=0.00..83225.55 rows=463695 width=19) (actual time=0.088..968.406 rows=461538 loops=65)
         Filter: (i < 30000000)
         Rows Removed by Filter: 1076923
 Planning time: 0.158 ms
 Execution time: 27331.849 ms
(8 rows)

head:
 explain  analyse select * from t where i < 40000000;
                                                         QUERY PLAN                                                          
-----------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..83225.55 rows=39501511 width=19) (actual time=0.890..38438.753 rows=39999999 loops=1)
   Workers Planned: 64
   Workers Launched: 64
   ->  Parallel Seq Scan on t  (cost=0.00..83225.55 rows=617211 width=19) (actual time=0.074..1235.180 rows=615385 loops=65)
         Filter: (i < 40000000)
         Rows Removed by Filter: 923077
 Planning time: 0.113 ms
 Execution time: 41609.855 ms
(8 rows)

patch:
explain  analyse select * from t where i < 40000000;
                                                         QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..83225.55 rows=39501511 width=19) (actual time=1.085..31806.671 rows=39999999 loops=1)
   Workers Planned: 64
   Workers Launched: 64
   ->  Parallel Seq Scan on t  (cost=0.00..83225.55 rows=617211 width=19) (actual time=0.083..954.342 rows=615385 loops=65)
         Filter: (i < 40000000)
         Rows Removed by Filter: 923077
 Planning time: 0.151 ms
 Execution time: 35341.429 ms
(8 rows)

head:
explain  analyse select * from t where i < 45000000;
                                                           QUERY PLAN                                                           
--------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..102756.80 rows=44584013 width=19) (actual time=0.563..49156.252 rows=44999999 loops=1)
   Workers Planned: 32
   Workers Launched: 32
   ->  Parallel Seq Scan on t  (cost=0.00..102756.80 rows=1393250 width=19) (actual time=0.069..1905.436 rows=1363636 loops=33)
         Filter: (i < 45000000)
         Rows Removed by Filter: 1666667
 Planning time: 0.106 ms
 Execution time: 52722.476 ms
(8 rows)

patch:
 explain  analyse select * from t where i < 45000000;
                                                           QUERY PLAN                                                           
--------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..102756.80 rows=44584013 width=19) (actual time=0.545..37501.200 rows=44999999 loops=1)
   Workers Planned: 32
   Workers Launched: 32
   ->  Parallel Seq Scan on t  (cost=0.00..102756.80 rows=1393250 width=19) (actual time=0.068..2165.430 rows=1363636 loops=33)
         Filter: (i < 45000000)
         Rows Removed by Filter: 1666667
 Planning time: 0.087 ms
 Execution time: 41458.969 ms
(8 rows)

The improvement in performance is greatest when the selectivity is around 20-30%, which is a case where parallelism is currently not selected.

I am testing the performance impact of this on TPC-H queries; in the meantime, I would appreciate some feedback on the design, etc.

--
Regards,
Rafia Sabih
Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Fri, May 19, 2017 at 7:55 AM, Rafia Sabih
<rafia.sabih@enterprisedb.com> wrote:
> While analysing the performance of TPC-H queries for the newly developed
> parallel-operators, viz, parallel index, bitmap heap scan, etc. we noticed
> that the time taken by gather node is significant. On investigation, as per
> the current method it copies each tuple to the shared queue and notifies the
> receiver. Since, this copying is done in shared queue, a lot of locking and
> latching overhead is there.
>
> So, in this POC patch I tried to copy all the tuples in a local queue thus
> avoiding all the locks and latches. Once, the local queue is filled as per
> it's capacity, tuples are transferred to the shared queue. Once, all the
> tuples are transferred the receiver is sent the notification about the same.

What if, instead of doing this, we switched the shm_mq stuff to use atomics?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Fri, May 19, 2017 at 5:58 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, May 19, 2017 at 7:55 AM, Rafia Sabih
> <rafia.sabih@enterprisedb.com> wrote:
>> While analysing the performance of TPC-H queries for the newly developed
>> parallel-operators, viz, parallel index, bitmap heap scan, etc. we noticed
>> that the time taken by gather node is significant. On investigation, as per
>> the current method it copies each tuple to the shared queue and notifies the
>> receiver. Since, this copying is done in shared queue, a lot of locking and
>> latching overhead is there.
>>
>> So, in this POC patch I tried to copy all the tuples in a local queue thus
>> avoiding all the locks and latches. Once, the local queue is filled as per
>> it's capacity, tuples are transferred to the shared queue. Once, all the
>> tuples are transferred the receiver is sent the notification about the same.
>
> What if, instead of doing this, we switched the shm_mq stuff to use atomics?
>

That is one of the very first things we tried, but it didn't show
us any improvement, probably because sending tuples one by one over
the shm_mq is not cheap.  Also, independently, we tried to reduce the
frequency of SetLatch (used to notify the receiver), but that didn't
improve the results either.  Now, one thing that could be tried is to
use atomics in shm_mq and reduce the frequency of notifying the
receiver, but I am not sure whether that can give us better results
than this idea.  There are a couple of other ideas which have been
tried to improve the speed of Gather, like avoiding the extra copy of
the tuple which we need to make before sending it
(tqueueReceiveSlot->ExecMaterializeSlot) and increasing the size of
the tuple queue, but none of those has shown any noticeable
improvement.  I am aware of all this because Dilip and I were involved
offlist in brainstorming ideas with Rafia to improve the speed of
Gather.  It might have been better to show the results of the ideas
that didn't work out, but I guess Rafia hasn't shared those on the
intuition that nobody would be interested in hearing what didn't work
out.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] [POC] Faster processing at Gather node

From
Alexander Kuzmenkov
Date:
Hi Rafia,

I like the idea of reducing locking overhead by sending tuples in bulk. 
The implementation could probably be simpler: you could extend the API 
of shm_mq to decouple notifying the receiver from actually putting data 
into the queue (i.e., make shm_mq_notify_receiver public and make a 
variant of shm_mq_sendv that doesn't send the notification). From Amit's 
letter I understand that you have already tried something along these 
lines and the performance wasn't good. What was the bottleneck then? If 
it's the locking around mq_bytes_read/written, it can be rewritten with 
atomics. I think it would be great to try this approach because it 
doesn't add much code, doesn't add any additional copying and improves 
shm_mq performance in general.
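
For concreteness, the split might look something like this (hypothetical declarations only; shm_mq_sendv() exists today, but the no-notify variant and the exported notify function do not):

/* Hypothetical API sketch, following the existing shm_mq style. */

/* Like shm_mq_sendv(), but does not wake up the receiver. */
extern shm_mq_result shm_mq_sendv_no_notify(shm_mq_handle *mqh,
                                            shm_mq_iovec *iov, int iovcnt,
                                            bool nowait);

/*
 * Currently a static function inside shm_mq.c; exporting it would let a
 * caller such as tqueue.c notify the receiver once per batch of tuples
 * rather than once per message.
 */
extern void shm_mq_notify_receiver(volatile shm_mq *mq);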

-- 
Alexander Kuzmenkov
Postgres Professional:http://www.postgrespro.com
The Russian Postgres Company




Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Fri, Sep 8, 2017 at 11:07 PM, Alexander Kuzmenkov
<a.kuzmenkov@postgrespro.ru> wrote:
> Hi Rafia,
>
> I like the idea of reducing locking overhead by sending tuples in bulk. The
> implementation could probably be simpler: you could extend the API of shm_mq
> to decouple notifying the sender from actually putting data into the queue
> (i.e., make shm_mq_notify_receiver public and make a variant of shm_mq_sendv
> that doesn't send the notification).
>

Rafia can comment on the details, but I would like to bring to your
notice that we need some kind of local buffer (queue) for Gather Merge
processing as well, where the data needs to be fetched in order from
the queues.  So, there is always a chance that some of the workers
have filled their queues while waiting for the master to extract the
data.  I think the patch posted by Rafia on the nearby thread [1]
addresses both problems in one patch.


[1] - https://www.postgresql.org/message-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS%3DfHiBJmbSOF74aBQ%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] [POC] Faster processing at Gather node

From
Rafia Sabih
Date:
On Sat, Sep 9, 2017 at 8:14 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Sep 8, 2017 at 11:07 PM, Alexander Kuzmenkov
> <a.kuzmenkov@postgrespro.ru> wrote:
>> Hi Rafia,
>>
>> I like the idea of reducing locking overhead by sending tuples in bulk. The
>> implementation could probably be simpler: you could extend the API of shm_mq
>> to decouple notifying the sender from actually putting data into the queue
>> (i.e., make shm_mq_notify_receiver public and make a variant of shm_mq_sendv
>> that doesn't send the notification).
>>
>
> Rafia can comment on details, but I would like to bring it to your
> notice that we need kind of local buffer (queue) for gathermerge
> processing as well where the data needs to be fetched in order from
> queues.  So, there is always a chance that some of the workers have
> filled their queues while waiting for the master to extract the data.
> I think the patch posted by Rafia on the nearby thread [1] addresses
> both the problems by one patch.
>
>
> [1] - https://www.postgresql.org/message-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS%3DfHiBJmbSOF74aBQ%40mail.gmail.com
>

Thanks Alexander for your interest in this work. As rightly pointed
out by Amit, when experimenting with this patch we found that there
are cases where the master is busy and unable to read tuples from the
shared queue, and the workers get stuck as they cannot process tuples
any more. When experimenting along these lines, I found that Q12 of
TPC-H shows a great performance improvement when increasing
shared_tuple_queue_size [1].
It was then that we realised that merging this with the idea of giving
an illusion of a larger tuple queue size with a local queue [1] could
be more beneficial. To explain precisely what merging the two ideas
means: we now write tuples into the local queue once the shared queue
is full, and as soon as enough tuples have accumulated in the local
queue we copy them from the local queue to the shared queue in one
memcpy call. It is giving good performance improvements in quite a few
cases.
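
Expressed as control flow, the merged approach looks roughly like this (an illustrative sketch only; every *_queue_* helper name here is made up, and the real patch works on shm_mq messages inside tqueue.c rather than a loop like this):

/* Sketch only -- all of the *_queue_* helpers are hypothetical. */
for (;;)
{
    TupleTableSlot *slot = ExecProcNode(outerPlan);

    if (TupIsNull(slot))
        break;

    if (local_queue_is_empty(lq) && shared_queue_has_space(mqh, slot))
    {
        /* Normal path: hand the tuple straight to the shared queue. */
        send_to_shared_queue(mqh, slot);
    }
    else
    {
        /* Shared queue is full (the leader is busy): buffer locally. */
        local_queue_add(lq, slot);

        /*
         * Once enough tuples have accumulated, copy them into the shared
         * queue with a single memcpy and a single notification.
         */
        if (local_queue_has_enough(lq))
            local_queue_flush(lq, mqh);
    }
}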

I'll be glad if you could have a look at this patch and enlighten me
with your suggestions. :-)

[1] - https://www.postgresql.org/message-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS%3DfHiBJmbSOF74aBQ%40mail.gmail.com

-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/



Re: [HACKERS] [POC] Faster processing at Gather node

From
Alexander Kuzmenkov
Date:
Thanks Rafia, Amit, now I understand the ideas behind the patch better. 
I'll see if I can look at the new one.

-- 

Alexander Kuzmenkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company




Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi Rafia,

On 2017-05-19 17:25:38 +0530, Rafia Sabih wrote:
> head:
> explain  analyse select * from t where i < 30000000;
>                                                          QUERY PLAN

Could you share how exactly you generated the data? Just so others can
compare a bit better with your results?

Regards,

Andres



Re: [HACKERS] [POC] Faster processing at Gather node

From
Rafia Sabih
Date:
On Tue, Oct 17, 2017 at 3:22 AM, Andres Freund <andres@anarazel.de> wrote:
> Hi Rafia,
>
> On 2017-05-19 17:25:38 +0530, Rafia Sabih wrote:
>> head:
>> explain  analyse select * from t where i < 30000000;
>>                                                          QUERY PLAN
>
> Could you share how exactly you generated the data? Just so others can
> compare a bit better with your results?
>

Sure. I used generate_series(1, 10000000);
Please find the attached script for the detailed steps.

-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/


Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi Everyone,

On 2017-05-19 17:25:38 +0530, Rafia Sabih wrote:
> While analysing the performance of TPC-H queries for the newly developed
> parallel-operators, viz, parallel index, bitmap heap scan, etc. we noticed
> that the time taken by gather node is significant. On investigation, as per
> the current method it copies each tuple to the shared queue and notifies
> the receiver. Since, this copying is done in shared queue, a lot of locking
> and latching overhead is there.
>
> So, in this POC patch I tried to copy all the tuples in a local queue thus
> avoiding all the locks and latches. Once, the local queue is filled as per
> it's capacity, tuples are transferred to the shared queue. Once, all the
> tuples are transferred the receiver is sent the notification about the same.
>
> With this patch I could see significant improvement in performance for
> simple queries,

I've spent some time looking into this, and I'm not quite convinced this
is the right approach.  Let me start by describing where I see the
current performance problems around gather stemming from.

The observations here are made using
select * from t where i < 30000000 offset 29999999 - 1;
with Rafia's data. That avoids slowdowns on the leader due to too many
rows printed out. Sometimes I'll also use
SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;
on tpch data to show the effects on wider tables.

The precise query doesn't really matter, the observations here are more
general, I hope.

1) nodeGather.c re-projects every row from workers. As far as I can tell
   that's now always exactly the same targetlist as it's coming from the
   worker. Projection capability was added in 8538a6307049590 (without
   checking whether it's needed afaict), but I think it in turn often
   obsoleted by 992b5ba30dcafdc222341505b072a6b009b248a7.  My
   measurement shows that removing the projection yields quite massive
   speedups in queries like yours, which is not too surprising.

   I suspect this just needs a tlist_matches_tupdesc check + an if
   around ExecProject(). And a test, right now tests pass unless
   force_parallel_mode is used even if just commenting out the
   projection unconditionally.
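
   A minimal sketch of that check (assuming a tlist_matches_tupdesc-style
   test computed once at init time; the code below paraphrases
   nodeGather.c rather than quoting it):

/* Sketch only -- not the actual nodeGather.c code. */
slot = gather_getnext(node);
if (TupIsNull(slot))
    return slot;

/*
 * Hypothetical flag, set once at ExecInitGather() time when the worker
 * tuples already match the Gather node's target list exactly.
 */
if (!node->need_projection)
    return slot;

econtext->ecxt_outertuple = slot;
return ExecProject(node->ps.ps_ProjInfo);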
 
  before commenting out nodeGather projection:
  rafia time: 8283.583
  rafia profile:
+   30.62%  postgres  postgres             [.] shm_mq_receive
+   18.49%  postgres  postgres             [.] s_lock
+   10.08%  postgres  postgres             [.] SetLatch
-    7.02%  postgres  postgres             [.] slot_deform_tuple
   - slot_deform_tuple
      - 88.01% slot_getsomeattrs
           ExecInterpExpr
           ExecGather
           ExecLimit
 
  lineitem time: 8448.468
  lineitem profile:
+   24.63%  postgres  postgres             [.] slot_deform_tuple
+   24.43%  postgres  postgres             [.] shm_mq_receive
+   17.36%  postgres  postgres             [.] ExecInterpExpr
+    7.41%  postgres  postgres             [.] s_lock
+    5.73%  postgres  postgres             [.] SetLatch

  after:
  rafia time: 6660.224
  rafia profile:
+   36.77%  postgres  postgres              [.] shm_mq_receive
+   19.33%  postgres  postgres              [.] s_lock
+   13.14%  postgres  postgres              [.] SetLatch
+    9.22%  postgres  postgres              [.] AllocSetReset
+    4.27%  postgres  postgres              [.] ExecGather
+    2.79%  postgres  postgres              [.] AllocSetAlloc
  lineitem time: 4507.416
  lineitem profile:
+   34.81%  postgres  postgres            [.] shm_mq_receive
+   15.45%  postgres  postgres            [.] s_lock
+   13.38%  postgres  postgres            [.] SetLatch
+    9.87%  postgres  postgres            [.] AllocSetReset
+    5.82%  postgres  postgres            [.] ExecGather
  As is quite clearly visible, avoiding the projection yields some major speedups.
  The following analysis here has the projection removed.

2) The spinlocks both on the sending and receiving side are quite hot:
  rafia query leader:
+   36.16%  postgres  postgres            [.] shm_mq_receive
+   19.49%  postgres  postgres            [.] s_lock
+   13.24%  postgres  postgres            [.] SetLatch
   The presence of s_lock shows us that we're clearly often contending
   on spinlocks, given that's the slow-path for SpinLockAcquire(). In
   shm_mq_receive the instruction profile shows:

      │               SpinLockAcquire(&mq->mq_mutex);
      │1 5ab:   mov    $0xa9b580,%ecx
      │         mov    $0x4a4,%edx
      │         mov    $0xa9b538,%esi
      │         mov    %r15,%rdi
      │       → callq  s_lock
      │       ↑ jmpq   2a1
      │       tas():
      │1 5c7:   mov    $0x1,%eax
32.83 │         lock   xchg %al,(%r15)
      │       shm_mq_inc_bytes_read():
      │               SpinLockAcquire(&mq->mq_mutex);

   and

 0.01 │         pop    %r15
 0.04 │       ← retq
      │         nop
      │       tas():
      │1 338:   mov    $0x1,%eax
17.59 │         lock   xchg %al,(%r15)
      │       shm_mq_get_bytes_written():
      │               SpinLockAcquire(&mq->mq_mutex);
 0.05 │         test   %al,%al
 0.01 │       ↓ jne    448
      │               v = mq->mq_bytes_written;
   rafia query worker:
+   33.00%  postgres  postgres            [.] shm_mq_send_bytes
+    9.90%  postgres  postgres            [.] s_lock
+    7.74%  postgres  postgres            [.] shm_mq_send
+    5.40%  postgres  postgres            [.] ExecInterpExpr
+    5.34%  postgres  postgres            [.] SetLatch
   Again, we have strong indicators for a lot of spinlock
   contention. The instruction profiles show the same;

   shm_mq_send_bytes:
      │                               shm_mq_inc_bytes_written(mq, MAXALIGN(sendnow));
      │         and    $0xfffffffffffffff8,%r15
      │       tas():
 0.08 │         mov    %ebp,%eax
31.07 │         lock   xchg %al,(%r14)
      │       shm_mq_inc_bytes_written():
      │        * Increment the number of bytes written.
      │        */

   and

      │3  98:   cmp    %r13,%rbx
 0.70 │       ↓ jae    430
      │       tas():
 0.12 │1  a1:   mov    %ebp,%eax
28.53 │         lock   xchg %al,(%r14)
      │       shm_mq_get_bytes_read():
      │               SpinLockAcquire(&mq->mq_mutex);
      │         test   %al,%al
      │       ↓ jne    298
      │               v = mq->mq_bytes_read;

   shm_mq_send:
      │       tas():
50.73 │         lock   xchg %al,0x0(%r13)
      │       shm_mq_notify_receiver():
      │       shm_mq_notify_receiver(volatile shm_mq *mq)
      │       {
      │               PGPROC     *receiver;
      │               bool            detached;
 

   My interpretation here is that it's not just the effect of the
   locking causing the slowdown, but to a significant degree the effect
   of the implied bus lock.

   To test that theory, here are the timings for
   1) spinlocks present
      time: 6593.045
   2) spinlock acquisition replaced by *full* memory barriers, which on x86 is a lock; addl $0,0(%%rsp)
      time: 5159.306
   3) spinlocks replaced by read/write barriers as appropriate.
      time: 4687.610

   By my understanding of shm_mq.c's logic, something like 3) ought to
   be doable in a correct manner. There should, in normal
   circumstances, be only one end modifying each of the protected
   variables. Doing so instead of using full block atomics with locked
   instructions is very likely to yield considerably better performance.
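
   To illustrate idea 3), the single-writer counters could be handled
   roughly as below (a sketch that assumes exactly one sender and one
   receiver per queue, as shm_mq already guarantees; the patch posted
   later in this thread uses 8-byte atomic loads and stores so that
   platforms without native 8-byte accesses still work):

/* Sketch only: sender side, after copying 'sendnow' bytes into mq_ring. */
static void
sender_advance(volatile shm_mq *mq, Size sendnow)
{
    /* Make the new ring contents visible before publishing the counter. */
    pg_write_barrier();
    mq->mq_bytes_written += sendnow;    /* only the sender ever writes this */
}

/* Sketch only: receiver side, before reading data out of mq_ring. */
static Size
receiver_available(volatile shm_mq *mq)
{
    uint64      written = mq->mq_bytes_written;

    /* Don't let reads of ring data be reordered before the counter read. */
    pg_read_barrier();
    return written - mq->mq_bytes_read; /* only the receiver ever writes mq_bytes_read */
}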
 
  The top-level profile after 3 is:
  leader:
+   25.89%  postgres  postgres          [.] shm_mq_receive
+   24.78%  postgres  postgres          [.] SetLatch
+   14.63%  postgres  postgres          [.] AllocSetReset
+    7.31%  postgres  postgres          [.] ExecGather
  worker:
+   14.02%  postgres  postgres            [.] ExecInterpExpr
+   11.63%  postgres  postgres            [.] shm_mq_send_bytes
+   11.25%  postgres  postgres            [.] heap_getnext
+    6.78%  postgres  postgres            [.] SetLatch
+    6.38%  postgres  postgres            [.] slot_deform_tuple
  still a lot of cycles in the queue code, but proportionally less.

4) Modulo computations in shm_mq are expensive:

      │       shm_mq_send_bytes():
      │                               Size            offset = mq->mq_bytes_written % (uint64) ringsize;
 0.12 │1  70:   xor    %edx,%edx
      │                               Size            sendnow = Min(available, ringsize - offset);
      │         mov    %r12,%rsi
      │                               Size            offset = mq->mq_bytes_written % (uint64) ringsize;
43.75 │         div    %r12
      │                               memcpy(&mq->mq_ring[mq->mq_ring_offset + offset],
 7.23 │         movzbl 0x31(%r15),%eax

      │       shm_mq_receive_bytes():
      │                       used = written - mq->mq_bytes_read;
 1.17 │         sub    %rax,%rcx
      │                       offset = mq->mq_bytes_read % (uint64) ringsize;
18.49 │         div    %rbp
      │         mov    %rdx,%rdi

   that we end up with a full blown div makes sense - the compiler can't
   know anything about ringsize, therefore it can't optimize to a mask.
   I think we should force the size of the ringbuffer to be a power of
   two, and use a mask instead of a size for the buffer.
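
   For illustration, the difference that restriction makes (a sketch;
   mq_ring_mask is a hypothetical field equal to ringsize - 1):

/* Today: ringsize is arbitrary, so the compiler has to emit a div. */
offset = mq->mq_bytes_written % (uint64) ringsize;

/* With ringsize forced to a power of two, the same computation becomes
 * a single AND against a precomputed mask. */
offset = mq->mq_bytes_written & mq->mq_ring_mask;   /* mq_ring_mask = ringsize - 1 */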
 

5) There's a *lot* of latch interactions. The biggest issue actually is
   the memory barrier implied by a SetLatch, waiting for the latch
   barely shows up.
 
  from 4) above:
  leader:
+   25.89%  postgres  postgres          [.] shm_mq_receive
+   24.78%  postgres  postgres          [.] SetLatch
+   14.63%  postgres  postgres          [.] AllocSetReset
+    7.31%  postgres  postgres          [.] ExecGather

      │      0000000000781b10 <SetLatch>:
      │      SetLatch():
      │              /*
      │               * The memory barrier has to be placed here to ensure that any flag
      │               * variables possibly changed by this process have been flushed to main
      │               * memory, before we check/set is_set.
      │               */
      │              pg_memory_barrier();
77.43 │        lock   addl   $0x0,(%rsp)
      │
      │              /* Quick exit if already set */
      │              if (latch->is_set)
 0.12 │        mov    (%rdi),%eax
 

   Commenting out the memory barrier - which is NOT CORRECT - improves
   timing:
   before: 4675.626
   after: 4125.587

   The correct fix obviously is not to break latch correctness. I think
   the big problem here is that we perform a SetLatch() for every read
   from the latch.
 
   I think we should
   a) introduce a batch variant for receiving like:

extern shm_mq_result shm_mq_receivev(shm_mq_handle *mqh,
                                     shm_mq_iovec *iov, int *iovcnt,
                                     bool nowait)

      which then only does the SetLatch() at the end. And maybe if it
      wrapped around.

   b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
      queue whenever it's not empty when a tuple is ready.
   I've not benchmarked this, but I'm pretty certain that the benefit
   isn't just going to be the reduced cost of SetLatch() calls, but also
   increased performance due to fewer context switches.
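
   To make a) concrete, the leader's drain loop could then look roughly
   like this (illustrative only; shm_mq_receivev() does not exist yet,
   and its exact semantics here are assumed):

/* Sketch: drain up to 16 messages per wakeup instead of one. */
shm_mq_iovec    iov[16];
int             iovcnt = lengthof(iov);
shm_mq_result   res;
int             i;

res = shm_mq_receivev(mqh, iov, &iovcnt, true /* nowait */);
if (res == SHM_MQ_SUCCESS)
{
    for (i = 0; i < iovcnt; i++)
        store_worker_tuple(iov[i].data, iov[i].len);    /* hypothetical */

    /*
     * Internally, shm_mq_receivev() would advance mq_bytes_read once for
     * the whole batch and do a single SetLatch() on the sender.
     */
}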
 

6) I've observed, using strace, debug outputs with timings, and top with
   a short interval, that quite often only one backend has sufficient
   work, while other backends are largely idle.

   I think the problem here is that the "anti round robin" provisions from
   bc7fcab5e36b9597857, while much better than the previous state, have
   swung a bit too far into the other direction. Especially if we were
   to introduce batching as I suggest in 5), but even without, this
   leads to back-and-forth on half-empty queues between the
   gatherstate->nextreader backend and the leader.

   I'm not 100% certain what the right fix here is.

   One fairly drastic solution would be to move away from a
   single-producer-single-consumer style, per worker, queue, to a global
   queue. That'd avoid fairness issues between the individual workers,
   at the price of potential added contention. One disadvantage is that
   such a combined queue approach is not easily applicable for gather
   merge.

   One less drastic approach would be to try to drain the queue
   fully in one batch, and then move to the next queue. That'd avoid
   triggering small wakeups for each individual tuple, as one
   currently would get without the 'stickyness'.

   It might also be a good idea to use a more directed form of wakeups,
   e.g. using a per-worker latch + a wait event set, to avoid iterating
   over all workers.
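
   A rough sketch of that directed-wakeup idea using the existing
   WaitEventSet machinery (assumptions: each worker gets its own latch,
   reachable from the leader through a hypothetical worker_latch[] array;
   the leader's own process latch and error handling are ignored here):

/* Sketch only.  Built once when the Gather node starts its workers. */
WaitEventSet *wes = CreateWaitEventSet(CurrentMemoryContext, nworkers);
WaitEvent    *events = palloc(nworkers * sizeof(WaitEvent));
int           i,
              nready;

for (i = 0; i < nworkers; i++)
    AddWaitEventToSet(wes, WL_LATCH_SET, PGINVALID_SOCKET,
                      worker_latch[i],              /* hypothetical */
                      (void *) (intptr_t) i);       /* remember the worker */

/* Each iteration: sleep, then drain only the queues that signalled. */
nready = WaitEventSetWait(wes, -1 /* no timeout */, events, nworkers,
                          WAIT_EVENT_EXECUTE_GATHER);
for (i = 0; i < nready; i++)
    drain_worker_queue((int) (intptr_t) events[i].user_data);  /* hypothetical */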
 


Unfortunately the patch's "local worker queue" concept seems, to me,
like it's not quite addressing the structural issues, instead opting to
address them by adding another layer of queuing. I suspect that if we'd
go for the above solutions there'd be only very small benefit in
implementing such per-worker local queues.

Greetings,

Andres Freund



Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi,

On 2017-10-17 14:39:57 -0700, Andres Freund wrote:
> I've spent some time looking into this, and I'm not quite convinced this
> is the right approach.  Let me start by describing where I see the
> current performance problems around gather stemming from.

One further approach to several of these issues could also be to change
things a bit more radically:

Instead of the current shm_mq + tqueue.c, have a drastically simpler
queue, that just stores fixed width dsa_pointers. Dealing with that
queue will be quite a bit faster. In that queue one would store dsa.c
managed pointers to tuples.

One thing that makes that attractive is that that'd move a bunch of
copying in the leader process solely to the worker processes, because
the leader could just convert the dsa_pointer into a local pointer and
hand that upwards the execution tree.

We'd possibly need some halfway clever way to reuse dsa allocations, but
that doesn't seem impossible.
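
A sketch of how the two ends might then look (purely illustrative; the
fixed-width pointer queue is hypothetical, the surrounding variables are
assumed to exist in context, and reuse/freeing of the DSA chunks is
glossed over):

/* Worker side: copy the tuple into DSA once, enqueue only its pointer. */
dsa_pointer dp = dsa_allocate(area, tuple_len);

memcpy(dsa_get_address(area, dp), tuple_data, tuple_len);
pointer_queue_push(pq, dp);                         /* hypothetical queue */

/* Leader side: no copy at all -- translate the pointer and hand the
 * tuple up the execution tree; dsa_free() happens once it's consumed. */
dsa_pointer dp2 = pointer_queue_pop(pq);            /* hypothetical */
MinimalTuple tup = (MinimalTuple) dsa_get_address(area, dp2);

ExecStoreMinimalTuple(tup, slot, false);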

Greetings,

Andres Freund



Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Tue, Oct 17, 2017 at 5:39 PM, Andres Freund <andres@anarazel.de> wrote:
> The precise query doesn't really matter, the observations here are more
> general, I hope.
>
> 1) nodeGather.c re-projects every row from workers. As far as I can tell
>    that's now always exactly the same targetlist as it's coming from the
>    worker. Projection capability was added in 8538a6307049590 (without
>    checking whether it's needed afaict), but I think it in turn often
>    obsoleted by 992b5ba30dcafdc222341505b072a6b009b248a7.  My
>    measurement shows that removing the projection yields quite massive
>    speedups in queries like yours, which is not too surprising.

That seems like an easy and worthwhile optimization.

>    I suspect this just needs a tlist_matches_tupdesc check + an if
>    around ExecProject(). And a test, right now tests pass unless
>    force_parallel_mode is used even if just commenting out the
>    projection unconditionally.

So, for this to fail, we'd need a query that uses parallelism but
where the target list contains a parallel-restricted function.  Also,
the function should really be such that we'll reliably get a failure
rather than only with some small probability.  I'm not quite sure how
to put together such a test case, but there's probably some way.

> 2) The spinlocks both on the the sending and receiving side a quite hot:
>
>    rafia query leader:
> +   36.16%  postgres  postgres            [.] shm_mq_receive
> +   19.49%  postgres  postgres            [.] s_lock
> +   13.24%  postgres  postgres            [.] SetLatch
>
>    To test that theory, here are the timings for
>    1) spinlocks present
>       time: 6593.045
>    2) spinlocks acuisition replaced by *full* memory barriers, which on x86 is a lock; addl $0,0(%%rsp)
>       time: 5159.306
>    3) spinlocks replaced by read/write barriers as appropriate.
>       time: 4687.610
>
>    By my understanding of shm_mq.c's logic, something like 3) aught to
>    be doable in a correct manner. There should be, in normal
>    circumstances, only be one end modifying each of the protected
>    variables. Doing so instead of using full block atomics with locked
>    instructions is very likely to yield considerably better performance.

I think the sticking point here will be the handling of the
mq_detached flag.  I feel like I fixed a bug at some point where this
had to be checked or set under the lock at the same time we were
checking or setting mq_bytes_read and/or mq_bytes_written, but I don't
remember the details.  I like the idea, though.

Not sure what happened to #3 on your list... you went from #2 to #4.

> 4) Modulo computations in shm_mq are expensive:
>
>    that we end up with a full blown div makes sense - the compiler can't
>    know anything about ringsize, therefore it can't optimize to a mask.
>    I think we should force the size of the ringbuffer to be a power of
>    two, and use a maks instead of a size for the buffer.

This seems like it would require some redesign.  Right now we let the
caller choose any size they want and subtract off our header size to
find the actual queue size.  We can waste up to MAXALIGN-1 bytes, but
that's not much.  This would waste up to half the bytes provided,
which is probably not cool.

> 5) There's a *lot* of latch interactions. The biggest issue actually is
>    the memory barrier implied by a SetLatch, waiting for the latch
>    barely shows up.
>
>    Commenting out the memory barrier - which is NOT CORRECT - improves
>    timing:
>    before: 4675.626
>    after: 4125.587
>
>    The correct fix obviously is not to break latch correctness. I think
>    the big problem here is that we perform a SetLatch() for every read
>    from the latch.

I think it's a little bit of an overstatement to say that commenting
out the memory barrier is not correct.  When we added that code, we
removed this comment:

- * Presently, when using a shared latch for interprocess signalling, the
- * flag variable(s) set by senders and inspected by the wait loop must
- * be protected by spinlocks or LWLocks, else it is possible to miss events
- * on machines with weak memory ordering (such as PPC).  This restriction
- * will be lifted in future by inserting suitable memory barriers into
- * SetLatch and ResetLatch.

It seems to me that in any case where the data is protected by an
LWLock, the barriers we've added to SetLatch and ResetLatch are
redundant.  I'm not sure if that's entirely true in the spinlock case,
because S_UNLOCK() is only documented to have release semantics, so
maybe the load of latch->is_set could be speculated before the lock is
dropped.  But I do wonder if we're just multiplying barriers endlessly
instead of trying to think of ways to minimize them (e.g. have a
variant of SpinLockRelease that acts as a full barrier instead of a
release barrier, and then avoid a second barrier when checking the
latch state).

All that having been said, a batch variant for reading tuples in bulk
might make sense.  I think when I originally wrote this code I was
thinking that one process might be filling the queue while another
process was draining it, and that it might therefore be important to
free up space as early as possible.  But maybe that's not a very good
intuition.

>    b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
>       queue whenever it's not empty when a tuple is ready.

Batching them with what?  I hate to postpone sending tuples we've got;
that sounds nice in the case where we're sending tons of tuples at
high speed, but there might be other cases where it makes the leader
wait.

> 6) I've observed, using strace, debug outputs with timings, and top with
>    a short interval, that quite often only one backend has sufficient
>    work, while other backends are largely idle.

Doesn't that just mean we're bad at choosing how many workers to use?
If one worker can't outrun the leader, it's better to have the other
workers sleep and keep the one lucky worker on CPU than to keep
context switching.  Or so I would assume.

>    One fairly drastic solution would be to move away from a
>    single-produce-single-consumer style, per worker, queue, to a global
>    queue. That'd avoid fairness issues between the individual workers,
>    at the price of potential added contention. One disadvantage is that
>    such a combined queue approach is not easily applicable for gather
>    merge.

It might also lead to more contention.

>    One less drastic approach would be to move to try to drain the queue
>    fully in one batch, and then move to the next queue. That'd avoid
>    triggering a small wakeups for each individual tuple, as one
>    currently would get without the 'stickyness'.

I don't know if that is better but it seems worth a try.

>    It might also be a good idea to use a more directed form of wakeups,
>    e.g. using a per-worker latch + a wait event set, to avoid iterating
>    over all workers.

I don't follow.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi,

On 2017-10-18 15:46:39 -0400, Robert Haas wrote:
> > 2) The spinlocks both on the the sending and receiving side a quite hot:
> >
> >    rafia query leader:
> > +   36.16%  postgres  postgres            [.] shm_mq_receive
> > +   19.49%  postgres  postgres            [.] s_lock
> > +   13.24%  postgres  postgres            [.] SetLatch
> >
> >    To test that theory, here are the timings for
> >    1) spinlocks present
> >       time: 6593.045
> >    2) spinlocks acuisition replaced by *full* memory barriers, which on x86 is a lock; addl $0,0(%%rsp)
> >       time: 5159.306
> >    3) spinlocks replaced by read/write barriers as appropriate.
> >       time: 4687.610
> >
> >    By my understanding of shm_mq.c's logic, something like 3) aught to
> >    be doable in a correct manner. There should be, in normal
> >    circumstances, only be one end modifying each of the protected
> >    variables. Doing so instead of using full block atomics with locked
> >    instructions is very likely to yield considerably better performance.
> 
> I think the sticking point here will be the handling of the
> mq_detached flag.  I feel like I fixed a bug at some point where this
> had to be checked or set under the lock at the same time we were
> checking or setting mq_bytes_read and/or mq_bytes_written, but I don't
> remember the details.  I like the idea, though.

Hm. I'm a bit confused/surprised by that. What'd be the worst that can
happen if we don't immediately detect that the other side is detached?
At least if we only do so in the non-blocking paths, the worst that
could happen, it seems, is that we read/write at most one superfluous
queue entry, rather than reporting an error? Or is the concern that
detaching might be detected *too early*, before reading the last entry
from a queue?


> Not sure what happened to #3 on your list... you went from #2 to #4.

Threes are boring.


> > 4) Modulo computations in shm_mq are expensive:
> >
> >    that we end up with a full blown div makes sense - the compiler can't
> >    know anything about ringsize, therefore it can't optimize to a mask.
> >    I think we should force the size of the ringbuffer to be a power of
> >    two, and use a maks instead of a size for the buffer.
> 
> This seems like it would require some redesign.  Right now we let the
> caller choose any size they want and subtract off our header size to
> find the actual queue size.  We can waste up to MAXALIGN-1 bytes, but
> that's not much.  This would waste up to half the bytes provided,
> which is probably not cool.

Yea, I think it'd require a shm_mq_estimate_size(Size queuesize), that
returns the next power-of-two queuesize + overhead.
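
Something like this, perhaps (a hypothetical sketch; no such function
exists today, and the header overhead is only approximated here via
offsetof(shm_mq, mq_ring)):

/* Hypothetical helper: round the ring up to a power of two and add the
 * queue header, so callers know how much shared memory to reserve. */
Size
shm_mq_estimate_size(Size queue_size)
{
    Size        ring_size = 1;

    while (ring_size < queue_size)
        ring_size <<= 1;

    return MAXALIGN(offsetof(shm_mq, mq_ring)) + ring_size;
}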


> > 5) There's a *lot* of latch interactions. The biggest issue actually is
> >    the memory barrier implied by a SetLatch, waiting for the latch
> >    barely shows up.
> >
> >    Commenting out the memory barrier - which is NOT CORRECT - improves
> >    timing:
> >    before: 4675.626
> >    after: 4125.587
> >
> >    The correct fix obviously is not to break latch correctness. I think
> >    the big problem here is that we perform a SetLatch() for every read
> >    from the latch.
> 
> I think it's a little bit of an overstatement to say that commenting
> out the memory barrier is not correct.  When we added that code, we
> removed this comment:
> 
> - * Presently, when using a shared latch for interprocess signalling, the
> - * flag variable(s) set by senders and inspected by the wait loop must
> - * be protected by spinlocks or LWLocks, else it is possible to miss events
> - * on machines with weak memory ordering (such as PPC).  This restriction
> - * will be lifted in future by inserting suitable memory barriers into
> - * SetLatch and ResetLatch.
> 
> It seems to me that in any case where the data is protected by an
> LWLock, the barriers we've added to SetLatch and ResetLatch are
> redundant. I'm not sure if that's entirely true in the spinlock case,
> because S_UNLOCK() is only documented to have release semantics, so
> maybe the load of latch->is_set could be speculated before the lock is
> dropped.  But I do wonder if we're just multiplying barriers endlessly
> instead of trying to think of ways to minimize them (e.g. have a
> variant of SpinLockRelease that acts as a full barrier instead of a
> release barrier, and then avoid a second barrier when checking the
> latch state).

I'm not convinced by this. Imo the multiplying largely comes from
superfluous actions, like the per-entry SetLatch calls here, rather than
per batch.

After all I'd benchmarked this *after* an experimental conversion of
shm_mq to not use spinlocks - so there's actually no external barrier
providing these guarantees that could be combined with SetLatch()'s
barrier.

Presumably part of the cost here comes from the fact that the barriers
actually do have an influence over the ordering.


> All that having been said, a batch variant for reading tuples in bulk
> might make sense.  I think when I originally wrote this code I was
> thinking that one process might be filling the queue while another
> process was draining it, and that it might therefore be important to
> free up space as early as possible.  But maybe that's not a very good
> intuition.

I think that's a sensible goal, but I don't think that has to mean that
the draining has to happen after every entry. If you'd e.g. have a
shm_mq_receivev() with 16 iovecs, that'd commonly be only part of a
single tqueue mq (unless your tuples are > 4k).  I'll note that afaict
shm_mq_sendv() already batches its SetLatch() calls.

I think one important thing a batched drain can avoid is that a worker
awakes to just put one new tuple into the queue and then sleep
again. That's kinda expensive.


> >    b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
> >       queue whenever it's not empty when a tuple is ready.
> 
> Batching them with what?  I hate to postpone sending tuples we've got;
> that sounds nice in the case where we're sending tons of tuples at
> high speed, but there might be other cases where it makes the leader
> wait.

Yea, that'd need some smarts. How about doing something like batching up
locally as long as the queue contains less than one average sized batch?


> > 6) I've observed, using strace, debug outputs with timings, and top with
> >    a short interval, that quite often only one backend has sufficient
> >    work, while other backends are largely idle.
> 
> Doesn't that just mean we're bad at choosing how man workers to use?
> If one worker can't outrun the leader, it's better to have the other
> workers sleep and keep one the one lucky worker on CPU than to keep
> context switching.  Or so I would assume.

No, I don't think that's necessarily true. If that worker's queue is full
every time, then yes. But I think a common scenario is that the
"current" worker only has a small portion of the queue filled. Draining
that immediately just leads to increased cacheline bouncing.

I'd not previously thought about this, but won't staying sticky to the
current worker potentially cause pins on individual tuples to be held for a
potentially long time by workers not making any progress?


> >    It might also be a good idea to use a more directed form of wakeups,
> >    e.g. using a per-worker latch + a wait event set, to avoid iterating
> >    over all workers.
> 
> I don't follow.

Well, right now we're busily checking each worker's queue. That's fine
with a handful of workers, but starts to become not that cheap pretty
soon afterwards. In the surely common case where the workers are the
bottleneck (because that's when parallelism is worthwhile), we'll check
each worker's queue once one of them returned a single tuple. It'd not
be a stupid idea to have a per-worker latch and wait for the latches of
all workers at once. Then targetedly drain the queues of the workers
that WaitEventSetWait(nevents = nworkers) signalled as ready.

Greetings,

Andres Freund



Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Wed, Oct 18, 2017 at 4:30 PM, Andres Freund <andres@anarazel.de> wrote:
> Hm. I'm a bit confused/surprised by that. What'd be the worst that can
> happen if we don't immediately detect that the other side is detached?
> At least if we only do so in the non-blocking paths, the worst that
> seems that could happen is that we read/write at most one superflous
> queue entry, rather than reporting an error? Or is the concern that
> detaching might be detected *too early*, before reading the last entry
> from a queue?

Detaching too early is definitely a problem.  A worker is allowed to
start up, write all of its results into the queue, and then detach
without waiting for the leader to read those results.  (Reading
messages that weren't really written would be a problem too, of
course.)

> I'm not convinced by this. Imo the multiplying largely comes from
> superflous actions, like the per-entry SetLatch calls here, rather than
> per batch.
>
> After all I'd benchmarked this *after* an experimental conversion of
> shm_mq to not use spinlocks - so there's actually no external barrier
> providing these guarantees that could be combined with SetLatch()'s
> barrier.

OK.

>> All that having been said, a batch variant for reading tuples in bulk
>> might make sense.  I think when I originally wrote this code I was
>> thinking that one process might be filling the queue while another
>> process was draining it, and that it might therefore be important to
>> free up space as early as possible.  But maybe that's not a very good
>> intuition.
>
> I think that's a sensible goal, but I don't think that has to mean that
> the draining has to happen after every entry. If you'd e.g. have a
> shm_mq_receivev() with 16 iovecs, that'd commonly be only part of a
> single tqueue mq (unless your tuples are > 4k).  I'll note that afaict
> shm_mq_sendv() already batches its SetLatch() calls.

But that's a little different -- shm_mq_sendv() sends one message
collected from multiple places.  There's no more reason for it to wake
up the receiver before the whole message is written than there would
be for shm_mq_send(); it's the same problem.

> I think one important thing a batched drain can avoid is that a worker
> awakes to just put one new tuple into the queue and then sleep
> again. That's kinda expensive.

Yes.  Or - part of a tuple, which is worse.

>> >    b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
>> >       queue whenever it's not empty when a tuple is ready.
>>
>> Batching them with what?  I hate to postpone sending tuples we've got;
>> that sounds nice in the case where we're sending tons of tuples at
>> high speed, but there might be other cases where it makes the leader
>> wait.
>
> Yea, that'd need some smarts. How about doing something like batching up
> locally as long as the queue contains less than one average sized batch?

I don't follow.

> No, I don't think that's necesarily true. If that worker's queue is full
> every-time, then yes. But I think a common scenario is that the
> "current" worker only has a small portion of the queue filled. Draining
> that immediately just leads to increased cacheline bouncing.

Hmm, OK.

> I'd not previously thought about this, but won't staying sticky to the
> current worker potentially cause pins on individual tuples be held for a
> potentially long time by workers not making any progress?

Yes.

>> >    It might also be a good idea to use a more directed form of wakeups,
>> >    e.g. using a per-worker latch + a wait event set, to avoid iterating
>> >    over all workers.
>>
>> I don't follow.
>
> Well, right now we're busily checking each worker's queue. That's fine
> with a handful of workers, but starts to become not that cheap pretty
> soon afterwards. In the surely common case where the workers are the
> bottleneck (because that's when parallelism is worthwhile), we'll check
> each worker's queue once one of them returned a single tuple. It'd not
> be a stupid idea to have a per-worker latch and wait for the latches of
> all workers at once. Then targetedly drain the queues of the workers
> that WaitEventSetWait(nevents = nworkers) signalled as ready.

Hmm, interesting.  But we can't completely ignore the process latch
either, since among other things a worker erroring out and propagating
the error to the leader relies on that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Wed, Oct 18, 2017 at 3:09 AM, Andres Freund <andres@anarazel.de> wrote:
> Hi Everyone,
>
> On 2017-05-19 17:25:38 +0530, Rafia Sabih wrote:
>> While analysing the performance of TPC-H queries for the newly developed
>> parallel-operators, viz, parallel index, bitmap heap scan, etc. we noticed
>> that the time taken by gather node is significant. On investigation, as per
>> the current method it copies each tuple to the shared queue and notifies
>> the receiver. Since, this copying is done in shared queue, a lot of locking
>> and latching overhead is there.
>>
>> So, in this POC patch I tried to copy all the tuples in a local queue thus
>> avoiding all the locks and latches. Once, the local queue is filled as per
>> it's capacity, tuples are transferred to the shared queue. Once, all the
>> tuples are transferred the receiver is sent the notification about the same.
>>
>> With this patch I could see significant improvement in performance for
>> simple queries,
>
> I've spent some time looking into this, and I'm not quite convinced this
> is the right approach.
>

As per my understanding, the patch in this thread is dead (not
required) after the patch posted by Rafia in thread "Effect of
changing the value for PARALLEL_TUPLE_QUEUE_SIZE" [1].  There seem to
be two related problems in this area: the first is shm queue communication
overhead, and the second is that workers start to stall when the shm queue
gets full.  We can observe the first problem in simple queries that use
Gather, and the second in Gather Merge kind of scenarios.  We are trying to
resolve both the problems with the patch posted in another thread.  I
think there is some similarity with the patch posted on this thread,
but it is different.  I have mentioned something similar up thread as
well.


>  Let me start by describing where I see the
> current performance problems around gather stemming from.
>
> The observations here are made using
> select * from t where i < 30000000 offset 29999999 - 1;
> with Rafia's data. That avoids slowdowns on the leader due to too many
> rows printed out. Sometimes I'll also use
> SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;
> on tpch data to show the effects on wider tables.
>
> The precise query doesn't really matter, the observations here are more
> general, I hope.
>
> 1) nodeGather.c re-projects every row from workers. As far as I can tell
>    that's now always exactly the same targetlist as it's coming from the
>    worker. Projection capability was added in 8538a6307049590 (without
>    checking whether it's needed afaict), but I think it in turn often
>    obsoleted by 992b5ba30dcafdc222341505b072a6b009b248a7.  My
>    measurement shows that removing the projection yields quite massive
>    speedups in queries like yours, which is not too surprising.
>
>    I suspect this just needs a tlist_matches_tupdesc check + an if
>    around ExecProject(). And a test, right now tests pass unless
>    force_parallel_mode is used even if just commenting out the
>    projection unconditionally.
>

+1.  I think we should do something to avoid this.

>
>    Commenting out the memory barrier - which is NOT CORRECT - improves
>    timing:
>    before: 4675.626
>    after: 4125.587
>
>    The correct fix obviously is not to break latch correctness. I think
>    the big problem here is that we perform a SetLatch() for every read
>    from the latch.
>
>    I think we should
>    a) introduce a batch variant for receiving like:
>
> extern shm_mq_result shm_mq_receivev(shm_mq_handle *mqh,
>                                      shm_mq_iovec *iov, int *iovcnt,
>                                      bool nowait)
>
>       which then only does the SetLatch() at the end. And maybe if it
>       wrapped around.
>
>    b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
>       queue whenever it's not empty when a tuple is ready.
>

I think the patch posted in the other thread has tried to achieve such
batching by using local queues; maybe we should try some other way.

>
> Unfortunately the patch's "local worker queue" concept seems, to me,
> like it's not quite addressing the structural issues, instead opting to
> address them by adding another layer of queuing.
>

That is done to batch the tuples while sending them.  Sure, we
can do some of the other things as well, but I think the main
advantage is from batching the tuples in a smart way while sending
them and once that is done, we might not need many of the other
optimizations.


[1] - https://www.postgresql.org/message-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS%3DfHiBJmbSOF74aBQ%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Thu, Oct 19, 2017 at 1:16 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Oct 17, 2017 at 5:39 PM, Andres Freund <andres@anarazel.de> wrote:
>
>>    b) Use shm_mq_sendv in tqueue.c by batching up insertions into the
>>       queue whenever it's not empty when a tuple is ready.
>
> Batching them with what?  I hate to postpone sending tuples we've got;
> that sounds nice in the case where we're sending tons of tuples at
> high speed, but there might be other cases where it makes the leader
> wait.
>

I think Rafia's latest patch on the thread [1] has done something
similar, where the tuples are sent as long as there is space in the shared
memory queue, and then it turns to batching the tuples using local queues.


[1] - https://www.postgresql.org/message-id/CAOGQiiNiMhq5Pg3LiYxjfi2B9eAQ_q5YjS%3DfHiBJmbSOF74aBQ%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Wed, Oct 18, 2017 at 3:09 AM, Andres Freund <andres@anarazel.de> wrote:
> 2) The spinlocks both on the the sending and receiving side a quite hot:
>
>    rafia query leader:
> +   36.16%  postgres  postgres            [.] shm_mq_receive
> +   19.49%  postgres  postgres            [.] s_lock
> +   13.24%  postgres  postgres            [.] SetLatch

Here's a patch which, as per an off-list discussion between Andres,
Amit, and myself, removes the use of the spinlock for most
send/receive operations in favor of memory barriers and the atomics
support for 8-byte reads and writes.  I tested with a pgbench -i -s
300 database with pgbench_accounts_pkey dropped and
max_parallel_workers_per_gather boosted to 10.  I used this query:

select aid, count(*) from pgbench_accounts group by 1 having count(*) > 1;

which produces this plan:

 Finalize GroupAggregate  (cost=1235865.51..5569468.75 rows=10000000 width=12)
   Group Key: aid
   Filter: (count(*) > 1)
   ->  Gather Merge  (cost=1235865.51..4969468.75 rows=30000000 width=12)
         Workers Planned: 6
         ->  Partial GroupAggregate  (cost=1234865.42..1322365.42 rows=5000000 width=12)
               Group Key: aid
               ->  Sort  (cost=1234865.42..1247365.42 rows=5000000 width=4)
                     Sort Key: aid
                     ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..541804.00 rows=5000000 width=4)
(10 rows)

On hydra (PPC), these changes didn't help much.  Timings:

master: 29605.582, 29753.417, 30160.485
patch: 28218.396, 27986.951, 26465.584

That's about a 5-6% improvement.  On my MacBook, though, the
improvement was quite a bit more:

master: 21436.745, 20978.355, 19918.617
patch: 15896.573, 15880.652, 15967.176

Median-to-median, that's about a 24% improvement.

Any reviews appreciated.

Thanks,

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi,

On 2017-11-04 16:38:31 +0530, Robert Haas wrote:
> On hydra (PPC), these changes didn't help much.  Timings:
>
> master: 29605.582, 29753.417, 30160.485
> patch: 28218.396, 27986.951, 26465.584
>
> That's about a 5-6% improvement.  On my MacBook, though, the
> improvement was quite a bit more:

Hm. I wonder why that is. Random unverified theories (this plane doesn't
have power supplies for us mere mortals in coach, therefore I'm not
going to run benchmarks):

- Due to the lower per-core performance the leader backend is so bottlenecked that there's just not a whole lot of
  contention.  Therefore removing the lock doesn't help much.  That might be a bit different if the redundant
  projection is removed.
- IO performance on hydra is notoriously bad, so there might just not be enough data available for workers to process
  rows fast enough to cause contention.

> master: 21436.745, 20978.355, 19918.617
> patch: 15896.573, 15880.652, 15967.176
>
> Median-to-median, that's about a 24% improvement.

Neat!


> - * mq_detached can be set by either the sender or the receiver, so the mutex
> - * must be held to read or write it.  Memory barriers could be used here as
> - * well, if needed.
> + * mq_bytes_read and mq_bytes_written are not protected by the mutex.  Instead,
> + * they are written atomically using 8 byte loads and stores.  Memory barriers
> + * must be carefully used to synchronize reads and writes of these values with
> + * reads and writes of the actual data in mq_ring.

Maybe mention that there's a fallback for ancient platforms?


> @@ -900,15 +921,12 @@ shm_mq_send_bytes(shm_mq_handle *mqh, Size nbytes, const void *data,
>          }
>          else if (available == 0)
>          {
> -            shm_mq_result res;
> -
> -            /* Let the receiver know that we need them to read some data. */
> -            res = shm_mq_notify_receiver(mq);
> -            if (res != SHM_MQ_SUCCESS)
> -            {
> -                *bytes_written = sent;
> -                return res;
> -            }
> +            /*
> +             * Since mq->mqh_counterparty_attached is known to be true at this
> +             * point, mq_receiver has been set, and it can't change once set.
> +             * Therefore, we can read it without acquiring the spinlock.
> +             */
> +            SetLatch(&mq->mq_receiver->procLatch);

Might not hurt to assert mqh_counterparty_attached, just for slightly
easier debugging.

> @@ -983,19 +1009,27 @@ shm_mq_receive_bytes(shm_mq *mq, Size bytes_needed, bool nowait,
>      for (;;)
>      {
>          Size        offset;
> -        bool        detached;
> +        uint64        read;
>
>          /* Get bytes written, so we can compute what's available to read. */
> -        written = shm_mq_get_bytes_written(mq, &detached);
> -        used = written - mq->mq_bytes_read;
> +        written = pg_atomic_read_u64(&mq->mq_bytes_written);
> +        read = pg_atomic_read_u64(&mq->mq_bytes_read);

Theoretically we don't actually need to re-read this from shared memory,
we could just have the information in the local memory too. Right?
Doubtful however that it's important enough to bother given that we've
to move the cacheline for `mq_bytes_written` anyway, and will later also
dirty it to *update* `mq_bytes_read`.  Similarly on the write side.
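
Very roughly, what I have in mind is something like this, written as it
might look inside shm_mq.c (sketch only; mqh_local_bytes_read is an
invented field on the handle): since only the receiver ever advances
mq_bytes_read, it can keep the authoritative value in backend-local
memory and treat the shared atomic as write-only.

/*
 * Sketch: the receiver is the only writer of mq_bytes_read, so it can keep
 * the authoritative count locally and merely publish it to shared memory,
 * instead of re-reading the shared atomic on every receive.
 */
static uint64
shm_mq_bytes_usable(shm_mq_handle *mqh, shm_mq *mq)
{
    /* mq_bytes_written is advanced by the sender, so it must be (re)read. */
    uint64      written = pg_atomic_read_u64(&mq->mq_bytes_written);

    /* Our own read position never changes under us; use the local copy. */
    return written - mqh->mqh_local_bytes_read;
}

static void
shm_mq_note_bytes_read(shm_mq_handle *mqh, shm_mq *mq, Size n)
{
    mqh->mqh_local_bytes_read += n;

    /* Separate prior reads of mq_ring from publishing the new position. */
    pg_read_barrier();
    pg_atomic_write_u64(&mq->mq_bytes_read, mqh->mqh_local_bytes_read);
}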


> -/*
>   * Increment the number of bytes read.
>   */
>  static void
> @@ -1157,63 +1164,51 @@ shm_mq_inc_bytes_read(volatile shm_mq *mq, Size n)
>  {
>      PGPROC       *sender;
>
> -    SpinLockAcquire(&mq->mq_mutex);
> -    mq->mq_bytes_read += n;
> +    /*
> +     * Separate prior reads of mq_ring from the increment of mq_bytes_read
> +     * which follows.  Pairs with the full barrier in shm_mq_send_bytes().
> +     * We only need a read barrier here because the increment of mq_bytes_read
> +     * is actually a read followed by a dependent write.
> +     */
> +    pg_read_barrier();
> +
> +    /*
> +     * There's no need to use pg_atomic_fetch_add_u64 here, because nobody
> +     * else can be changing this value.  This method avoids taking the bus
> +     * lock unnecessarily.
> +     */

s/the bus lock/a bus lock/?  Might also be worth rephrasing away from
bus locks - there's a lot of different ways atomics are implemented.

>  /*
> - * Get the number of bytes written.  The sender need not use this to access
> - * the count of bytes written, but the receiver must.
> - */
> -static uint64
> -shm_mq_get_bytes_written(volatile shm_mq *mq, bool *detached)
> -{
> -    uint64        v;
> -
> -    SpinLockAcquire(&mq->mq_mutex);
> -    v = mq->mq_bytes_written;
> -    *detached = mq->mq_detached;
> -    SpinLockRelease(&mq->mq_mutex);
> -
> -    return v;
> -}

You reference this function in a comment elsewhere:

> +    /*
> +     * Separate prior reads of mq_ring from the write of mq_bytes_written
> +     * which we're about to do.  Pairs with shm_mq_get_bytes_written's read
> +     * barrier.
> +     */
> +    pg_write_barrier();


Greetings,

Andres Freund



Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Sat, Nov 4, 2017 at 5:55 PM, Andres Freund <andres@anarazel.de> wrote:
>> master: 21436.745, 20978.355, 19918.617
>> patch: 15896.573, 15880.652, 15967.176
>>
>> Median-to-median, that's about a 24% improvement.
>
> Neat!

With the attached stack of 4 patches, I get: 10811.768 ms, 10743.424
ms, 10632.006 ms, about a 49% improvement median-to-median.  Haven't
tried it on hydra or any other test cases yet.

skip-gather-project-v1.patch does what it says on the tin.  I still
don't have a test case for this, and I didn't find that it helped very
much, but it would probably help more in a test case with more
columns, and you said this looked like a big bottleneck in your
testing, so here you go.

shm-mq-less-spinlocks-v2.patch is updated from the version I posted
before based on your review comments.  I don't think it's really
necessary to mention that the 8-byte atomics have fallbacks here;
whatever needs to be said about that should be said in some central
place that talks about atomics, not in each user individually.  I
agree that there might be some further speedups possible by caching
some things in local memory, but I haven't experimented with that.

shm-mq-reduce-receiver-latch-set-v1.patch causes the receiver to only
consume input from the shared queue when the amount of unconsumed
input exceeds 1/4 of the queue size.  This caused a large performance
improvement in my testing because it causes the number of times the
latch gets set to drop dramatically. I experimented a bit with
thresholds of 1/8 and 1/2 before settling on 1/4; 1/4 seems to be
enough to capture most of the benefit.
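
In stripped-down form the logic is roughly as below; this is a sketch
rather than the patch itself (mqh_consume_pending is just an
illustrative name here, and the real code also has to cope with nowait
and detached senders):

/*
 * Sketch of deferred consumption: rather than advancing mq_bytes_read (and
 * setting the sender's latch) after every message, track consumed bytes
 * locally and only publish them once they exceed a quarter of the ring.
 */
static void
shm_mq_note_consumed(shm_mq_handle *mqh, Size nbytes)
{
    shm_mq     *mq = mqh->mqh_queue;

    mqh->mqh_consume_pending += nbytes;

    if (mqh->mqh_consume_pending > mq->mq_ring_size / 4)
    {
        /* Advances mq_bytes_read and sets the sender's latch. */
        shm_mq_inc_bytes_read(mq, mqh->mqh_consume_pending);
        mqh->mqh_consume_pending = 0;
    }
}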

remove-memory-leak-protection-v1.patch removes the memory leak
protection that Tom installed upon discovering that the original
version of tqueue.c leaked memory like crazy.  I think that it
shouldn't do that any more, courtesy of
6b65a7fe62e129d5c2b85cd74d6a91d8f7564608.  Assuming that's correct, we
can avoid a whole lot of tuple copying in Gather Merge and a much more
modest amount of overhead in Gather.  Since my test case exercised
Gather Merge, this bought ~400 ms or so.

Even with all of these patches applied, there's clearly still room for
more optimization, but MacOS's "sample" profiler seems to show that
the bottlenecks are largely shifting elsewhere:

Sort by top of stack, same collapsed (when >= 5):
        slot_getattr  (in postgres)        706
        slot_deform_tuple  (in postgres)        560
        ExecAgg  (in postgres)        378
        ExecInterpExpr  (in postgres)        372
        AllocSetAlloc  (in postgres)        319
        _platform_memmove$VARIANT$Haswell  (in libsystem_platform.dylib)        314
        read  (in libsystem_kernel.dylib)        303
        heap_compare_slots  (in postgres)        296
        combine_aggregates  (in postgres)        273
        shm_mq_receive_bytes  (in postgres)        272

I'm probably not super-excited about spending too much more time
trying to make the _platform_memmove time (only 20% or so of which
seems to be due to the shm_mq stuff) or the shm_mq_receive_bytes time
go down until, say, somebody JIT's slot_getattr and slot_deform_tuple.
:-)

One thing that might be worth doing is hammering on the AllocSetAlloc
time.  I think that's largely caused by allocating space for heap
tuples and then freeing them and allocating space for new heap tuples.
Gather/Gather Merge are guilty of that, but I think there may be other
places in the executor with the same issue. Maybe we could have
fixed-size buffers for small tuples that just get reused and only
palloc for large tuples (cf. SLAB_SLOT_SIZE).
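
In other words, something shaped roughly like this (names invented,
just to illustrate the idea; cf. the SLAB_SLOT_SIZE trick in
tuplesort.c):

#define TUPLE_BUFFER_SIZE 1024          /* "most" tuples fit; illustrative */

typedef struct ReusableTupleBuffer
{
    char        fixed[TUPLE_BUFFER_SIZE];   /* reused for small tuples */
    char       *big;                        /* current oversized allocation */
} ReusableTupleBuffer;

/*
 * Return space for a tuple of the given length: small tuples reuse the
 * fixed buffer with no palloc/pfree churn, and only tuples larger than the
 * buffer pay for an allocation.
 */
static void *
reusable_tuple_space(ReusableTupleBuffer *buf, Size len)
{
    if (len <= TUPLE_BUFFER_SIZE)
        return buf->fixed;

    if (buf->big != NULL)
        pfree(buf->big);
    buf->big = palloc(len);
    return buf->big;
}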

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
On 2017-11-05 01:05:59 +0100, Robert Haas wrote:
> skip-gather-project-v1.patch does what it says on the tin.  I still
> don't have a test case for this, and I didn't find that it helped very
> much, but it would probably help more in a test case with more
> columns, and you said this looked like a big bottleneck in your
> testing, so here you go.

The query where that showed a big benefit was

SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;

(i.e a not very selective filter, and then just throwing the results away)

still shows quite massive benefits:

before:
set parallel_setup_cost=0;set parallel_tuple_cost=0;set min_parallel_table_scan_size=0;set
max_parallel_workers_per_gather=8;
tpch_5[17938][1]=# explain analyze SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;

┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│                                                                 QUERY PLAN

├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ Limit  (cost=635802.67..635802.69 rows=1 width=127) (actual time=8675.097..8675.097 rows=0 loops=1)
│   ->  Gather  (cost=0.00..635802.67 rows=27003243 width=127) (actual time=0.289..7904.849 rows=26989780 loops=1)
│         Workers Planned: 8
│         Workers Launched: 7
│         ->  Parallel Seq Scan on lineitem  (cost=0.00..635802.67 rows=3375405 width=127) (actual time=0.124..528.667 rows=3373722 loops=8)
│               Filter: (l_suppkey > 5012)
│               Rows Removed by Filter: 376252
│ Planning time: 0.098 ms
│ Execution time: 8676.125 ms

└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
(9 rows)
after:
tpch_5[19754][1]=# EXPLAIN ANALYZE SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;

┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│                                                                 QUERY PLAN

├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ Limit  (cost=635802.67..635802.69 rows=1 width=127) (actual time=5984.916..5984.916 rows=0 loops=1)
│   ->  Gather  (cost=0.00..635802.67 rows=27003243 width=127) (actual time=0.214..5123.238 rows=26989780 loops=1)
│         Workers Planned: 8
│         Workers Launched: 7
│         ->  Parallel Seq Scan on lineitem  (cost=0.00..635802.67 rows=3375405 width=127) (actual time=0.025..649.887 rows=3373722 loops=8)
│               Filter: (l_suppkey > 5012)
│               Rows Removed by Filter: 376252
│ Planning time: 0.076 ms
│ Execution time: 5986.171 ms

└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
(9 rows)

so there clearly is still benefit (this is scale 100, but that shouldn't
make much of a difference).

Did not review the code.

> shm-mq-reduce-receiver-latch-set-v1.patch causes the receiver to only
> consume input from the shared queue when the amount of unconsumed
> input exceeds 1/4 of the queue size.  This caused a large performance
> improvement in my testing because it causes the number of times the
> latch gets set to drop dramatically. I experimented a bit with
> thresholds of 1/8 and 1/2 before setting on 1/4; 1/4 seems to be
> enough to capture most of the benefit.

Hm. Is consuming the relevant part, or notifying the sender about it?  I
suspect most of the benefit can be captured by updating bytes read (and
similarly on the other side w/ bytes written), but not setting the latch
unless thresholds are reached.  The advantage of updating the value,
even without notifying the other side, is that in the common case that
the other side gets around to checking the queue without having blocked,
it'll see the updated value.  If that works, that'd address the issue
that we might wait unnecessarily in a number of common cases.
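
Concretely, I'm thinking of something along these lines (sketch only;
mqh_unnotified_bytes is an invented field, and the analogous change
would apply on the sending side for mq_bytes_written):

/*
 * Sketch: always publish the new read position, so a sender that polls the
 * queue without sleeping sees the freed space immediately, but only set its
 * latch once enough space has accumulated to be worth a wakeup.
 */
static void
shm_mq_advance_bytes_read(shm_mq_handle *mqh, shm_mq *mq, Size nbytes)
{
    /* Separate prior reads of mq_ring from publishing the new position. */
    pg_read_barrier();

    /* Only the receiver updates mq_bytes_read, so a plain store suffices. */
    pg_atomic_write_u64(&mq->mq_bytes_read,
                        pg_atomic_read_u64(&mq->mq_bytes_read) + nbytes);

    mqh->mqh_unnotified_bytes += nbytes;
    if (mqh->mqh_unnotified_bytes > mq->mq_ring_size / 4)
    {
        SetLatch(&mq->mq_sender->procLatch);
        mqh->mqh_unnotified_bytes = 0;
    }
}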

Did not review the code.

> remove-memory-leak-protection-v1.patch removes the memory leak
> protection that Tom installed upon discovering that the original
> version of tqueue.c leaked memory like crazy.  I think that it
> shouldn't do that any more, courtesy of
> 6b65a7fe62e129d5c2b85cd74d6a91d8f7564608.  Assuming that's correct, we
> can avoid a whole lot of tuple copying in Gather Merge and a much more
> modest amount of overhead in Gather.

Yup, that conceptually makes sense.

Did not review the code.


> Even with all of these patches applied, there's clearly still room for
> more optimization, but MacOS's "sample" profiler seems to show that
> the bottlenecks are largely shifting elsewhere:
>
> Sort by top of stack, same collapsed (when >= 5):
>         slot_getattr  (in postgres)        706
>         slot_deform_tuple  (in postgres)        560
>         ExecAgg  (in postgres)        378
>         ExecInterpExpr  (in postgres)        372
>         AllocSetAlloc  (in postgres)        319
>         _platform_memmove$VARIANT$Haswell  (in
> libsystem_platform.dylib)        314
>         read  (in libsystem_kernel.dylib)        303
>         heap_compare_slots  (in postgres)        296
>         combine_aggregates  (in postgres)        273
>         shm_mq_receive_bytes  (in postgres)        272

Interesting.  Here it's
+    8.79%  postgres  postgres            [.] ExecAgg
+    6.52%  postgres  postgres            [.] slot_deform_tuple
+    5.65%  postgres  postgres            [.] slot_getattr
+    4.59%  postgres  postgres            [.] shm_mq_send_bytes
+    3.66%  postgres  postgres            [.] ExecInterpExpr
+    3.44%  postgres  postgres            [.] AllocSetAlloc
+    3.08%  postgres  postgres            [.] heap_fill_tuple
+    2.34%  postgres  postgres            [.] heap_getnext
+    2.25%  postgres  postgres            [.] finalize_aggregates
+    2.08%  postgres  libc-2.24.so        [.] __memmove_avx_unaligned_erms
+    2.05%  postgres  postgres            [.] heap_compare_slots
+    1.99%  postgres  postgres            [.] execTuplesMatch
+    1.83%  postgres  postgres            [.] ExecStoreTuple
+    1.83%  postgres  postgres            [.] shm_mq_receive
+    1.81%  postgres  postgres            [.] ExecScan


> I'm probably not super-excited about spending too much more time
> trying to make the _platform_memmove time (only 20% or so of which
> seems to be due to the shm_mq stuff) or the shm_mq_receive_bytes time
> go down until, say, somebody JIT's slot_getattr and slot_deform_tuple.
> :-)

Hm, let's say somebody were working on something like that. In that case
the benefits for this precise plan wouldn't yet be that big because a
good chunk of slot_getattr calls come from execTuplesMatch() which
doesn't really provide enough context to do JITing (when used for
hashaggs, there is more so it's JITed). Similarly gather merge's
heap_compare_slots() doesn't provide such context.

It's about ~9% currently, largely due to the faster aggregate
invocation. But the big benefit here would be all the deforming and the
comparisons...

- Andres



Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Sun, Nov 5, 2017 at 2:24 AM, Andres Freund <andres@anarazel.de> wrote:
>> shm-mq-reduce-receiver-latch-set-v1.patch causes the receiver to only
>> consume input from the shared queue when the amount of unconsumed
>> input exceeds 1/4 of the queue size.  This caused a large performance
>> improvement in my testing because it causes the number of times the
>> latch gets set to drop dramatically. I experimented a bit with
>> thresholds of 1/8 and 1/2 before setting on 1/4; 1/4 seems to be
>> enough to capture most of the benefit.
>
> Hm. Is consuming the relevant part, or notifying the sender about it?  I
> suspect most of the benefit can be captured by updating bytes read (and
> similarly on the other side w/ bytes written), but not setting the latch
> unless thresholds are reached.  The advantage of updating the value,
> even without notifying the other side, is that in the common case that
> the other side gets around to checking the queue without having blocked,
> it'll see the updated value.  If that works, that'd address the issue
> that we might wait unnecessarily in a number of common cases.

I think it's mostly notifying the sender.  Sending SIGUSR1 over and
over again isn't free, and it shows up in profiling.  I thought about
what you're proposing here, but it seemed more complicated to
implement, and I'm not sure that there would be any benefit.  The
reason is because, with these patches applied, even a radical
expansion of the queue size doesn't produce much incremental
performance benefit at least in the test case I was using.  I can
increase the size of the tuple queues 10x or 100x and it really
doesn't help very much.  And consuming sooner (but sometimes without
notifying) seems very similar to making the queue slightly bigger.

Also, what I see in general is that the CPU usage on the leader goes
to 100% but the workers are only maybe 20% saturated.  Making the
leader work any harder than absolutely necessarily therefore seems
like it's probably counterproductive.  I may be wrong, but it looks to
me like most of the remaining overhead seems to come from (1) the
synchronization overhead associated with memory barriers and (2)
backend-private work that isn't as cheap as would be ideal - e.g.
palloc overhead.

> Interesting.  Here it's
> +    8.79%  postgres  postgres            [.] ExecAgg
> +    6.52%  postgres  postgres            [.] slot_deform_tuple
> +    5.65%  postgres  postgres            [.] slot_getattr
> +    4.59%  postgres  postgres            [.] shm_mq_send_bytes
> +    3.66%  postgres  postgres            [.] ExecInterpExpr
> +    3.44%  postgres  postgres            [.] AllocSetAlloc
> +    3.08%  postgres  postgres            [.] heap_fill_tuple
> +    2.34%  postgres  postgres            [.] heap_getnext
> +    2.25%  postgres  postgres            [.] finalize_aggregates
> +    2.08%  postgres  libc-2.24.so        [.] __memmove_avx_unaligned_erms
> +    2.05%  postgres  postgres            [.] heap_compare_slots
> +    1.99%  postgres  postgres            [.] execTuplesMatch
> +    1.83%  postgres  postgres            [.] ExecStoreTuple
> +    1.83%  postgres  postgres            [.] shm_mq_receive
> +    1.81%  postgres  postgres            [.] ExecScan

More or less the same functions, somewhat different order.

>> I'm probably not super-excited about spending too much more time
>> trying to make the _platform_memmove time (only 20% or so of which
>> seems to be due to the shm_mq stuff) or the shm_mq_receive_bytes time
>> go down until, say, somebody JIT's slot_getattr and slot_deform_tuple.
>> :-)
>
> Hm, let's say somebody were working on something like that. In that case
> the benefits for this precise plan wouldn't yet be that big because a
> good chunk of slot_getattr calls come from execTuplesMatch() which
> doesn't really provide enough context to do JITing (when used for
> hashaggs, there is more so it's JITed). Similarly gather merge's
> heap_compare_slots() doesn't provide such context.
>
> It's about ~9% currently, largely due to the faster aggregate
> invocation. But the big benefit here would be all the deforming and the
> comparisons...

I'm not sure I follow you here.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [POC] Faster processing at Gather node

From
"Jim Van Fleet"
Date:
Ran this change with hammerdb  on a power 8 firestone

with 2 socket, 20 core
9.6 base        --  451991 NOPM
0926_master -- 464385 NOPM
11_04master -- 449177 NOPM
11_04_patch -- 431423 NOPM
-- two socket patch is a little down from previous base runs

With one socket
9.6 base          -- 393727 NOPM
v10rc1_base -- 350958 NOPM
11_04master -- 306506 NOPM
11_04_patch -- 313179 NOPM
--  one socket 11_04 master is quite a bit down from 9.6 and v10rc1_base -- the patch is up a bit over the base

Net -- the patch is about the same as current base on two socket, and on one socket  -- consistent with your pgbench (?) findings

As an aside, it is perhaps a worry that one socket is down over 20% from 9.6 and over 10% from v10rc1

Jim

pgsql-hackers-owner@postgresql.org wrote on 11/04/2017 06:08:31 AM:

> On hydra (PPC), these changes didn't help much.  Timings:
>
> master: 29605.582, 29753.417, 30160.485
> patch: 28218.396, 27986.951, 26465.584
>
> That's about a 5-6% improvement.  On my MacBook, though, the
> improvement was quite a bit more:
>
> master: 21436.745, 20978.355, 19918.617
> patch: 15896.573, 15880.652, 15967.176
>
> Median-to-median, that's about a 24% improvement.
>
> Any reviews appreciated.
>
> Thanks,
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
> [attachment "shm-mq-less-spinlocks-v1.2.patch" deleted by Jim Van
> Fleet/Austin/Contr/IBM]

Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi,

On November 5, 2017 1:33:24 PM PST, Jim Van Fleet <vanfleet@us.ibm.com> wrote:
>Ran this change with hammerdb  on a power 8 firestone
>
>with 2 socket, 20 core
>9.6 base        --  451991 NOPM
>0926_master -- 464385 NOPM
>11_04master -- 449177 NOPM
>11_04_patch -- 431423 NOPM
>-- two socket patch is a little down from previous base runs
>
>With one socket
>9.6 base          -- 393727 NOPM
>v10rc1_base -- 350958 NOPM
>11_04master -- 306506 NOPM
>11_04_patch -- 313179 NOPM
>--  one socket 11_04 master is quite a bit down from 9.6 and
>v10rc1_base
>-- the patch is up a bit over the base
>
>Net -- the patch is about the same as current base on two socket, and
>on
>one socket  -- consistent with your pgbench (?) findings
>
>As an aside, it is perhaps a worry that one socket is down over 20%
>from
>9.6 and over 10% from v10rc1

What query(s) did you measure?

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Sun, Nov 5, 2017 at 6:54 AM, Andres Freund <andres@anarazel.de> wrote
> On 2017-11-05 01:05:59 +0100, Robert Haas wrote:
>> skip-gather-project-v1.patch does what it says on the tin.  I still
>> don't have a test case for this, and I didn't find that it helped very
>> much,

I am also wondering in which case it can help and I can't think of the
case.  Basically, as part of projection in the gather, I think we are
just deforming the tuple which we anyway need to perform before
sending the tuple to the client (printtup) or probably at the upper
level of the node.

>> and you said this looked like a big bottleneck in your
>> testing, so here you go.
>

Is it possible that it shows the bottleneck only for 'explain analyze'
statement as we don't deform the tuple for that at a later stage?

> The query where that showed a big benefit was
>
> SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;
>
> (i.e a not very selective filter, and then just throwing the results away)
>
> still shows quite massive benefits:
>
> before:
> set parallel_setup_cost=0;set parallel_tuple_cost=0;set min_parallel_table_scan_size=0;set
max_parallel_workers_per_gather=8;
> tpch_5[17938][1]=# explain analyze SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;
>
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> │                                                                 QUERY PLAN
>
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> │ Limit  (cost=635802.67..635802.69 rows=1 width=127) (actual time=8675.097..8675.097 rows=0 loops=1)
> │   ->  Gather  (cost=0.00..635802.67 rows=27003243 width=127) (actual time=0.289..7904.849 rows=26989780 loops=1)
> │         Workers Planned: 8
> │         Workers Launched: 7
> │         ->  Parallel Seq Scan on lineitem  (cost=0.00..635802.67 rows=3375405 width=127) (actual time=0.124..528.667 rows=3373722 loops=8)
> │               Filter: (l_suppkey > 5012)
> │               Rows Removed by Filter: 376252
> │ Planning time: 0.098 ms
> │ Execution time: 8676.125 ms
>
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> (9 rows)
> after:
> tpch_5[19754][1]=# EXPLAIN ANALYZE SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;
>
┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> │                                                                 QUERY PLAN
>
├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> │ Limit  (cost=635802.67..635802.69 rows=1 width=127) (actual time=5984.916..5984.916 rows=0 loops=1)
> │   ->  Gather  (cost=0.00..635802.67 rows=27003243 width=127) (actual time=0.214..5123.238 rows=26989780 loops=1)
> │         Workers Planned: 8
> │         Workers Launched: 7
> │         ->  Parallel Seq Scan on lineitem  (cost=0.00..635802.67 rows=3375405 width=127) (actual time=0.025..649.887 rows=3373722 loops=8)
> │               Filter: (l_suppkey > 5012)
> │               Rows Removed by Filter: 376252
> │ Planning time: 0.076 ms
> │ Execution time: 5986.171 ms
>
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> (9 rows)
>
> so there clearly is still benefit (this is scale 100, but that shouldn't
> make much of a difference).
>

Do you see the benefit if the query is executed without using Explain Analyze?


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] [POC] Faster processing at Gather node

From
"Jim Van Fleet"
Date:

Andres Freund <andres@anarazel.de> wrote on 11/05/2017 03:40:15 PM:

hammerdb, in this configuration, runs a variant of tpcc
>
> What query(s) did you measure?
>
> Andres
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>

Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:

On November 6, 2017 7:30:49 AM PST, Jim Van Fleet <vanfleet@us.ibm.com> wrote:
>Andres Freund <andres@anarazel.de> wrote on 11/05/2017 03:40:15 PM:
>
>hammerdb, in this configuration, runs a variant of tpcc

Hard to believe that any of the changes here are relevant in that case - this is parallelism specific stuff. Whereas
tpcc is oltp, right?

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [HACKERS] [POC] Faster processing at Gather node

From
"Jim Van Fleet"
Date:
correct

> >hammerdb, in this configuration, runs a variant of tpcc
>
> Hard to believe that any of the changes here are relevant in that
> case - this is parallelism specific stuff. Whereas tpcc is oltp, right?
>
> Andres
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>

Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi,

Please don't top-quote on postgresql lists.

On 2017-11-06 09:44:24 -0600, Jim Van Fleet wrote:
> > >hammerdb, in this configuration, runs a variant of tpcc
> > 
> > Hard to believe that any of the changes here are relevant in that 
> > case - this is parallelism specific stuff. Whereas tpcc is oltp, right?

> correct

In that case, could you provide before/after profiles of the performance
changing runs?

Greetings,

Andres Freund



Re: [HACKERS] [POC] Faster processing at Gather node

From
"Jim Van Fleet"
Date:
Hi --

pgsql-hackers-owner@postgresql.org wrote on 11/06/2017 09:47:22 AM:

> From: Andres Freund <andres@anarazel.de>


>
> Hi,
>
> Please don't top-quote on postgresql lists.

Sorry
>
> On 2017-11-06 09:44:24 -0600, Jim Van Fleet wrote:
> > > >hammerdb, in this configuration, runs a variant of tpcc
> > >
> > > Hard to believe that any of the changes here are relevant in that
> > > case - this is parallelism specific stuff. Whereas tpcc is oltp, right?
>
> > correct
>
> In that case, could you provide before/after profiles of the performance
> changing runs?

sure -- happy to share -- gzipped files (which include trace, perf, netstat, system data) are are large (9G and 13G)
Should I post them on the list or somewhere else (or trim them -- if so, what would you like to have?)
>
Jim

Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi,

On 2017-11-06 10:56:43 +0530, Amit Kapila wrote:
> On Sun, Nov 5, 2017 at 6:54 AM, Andres Freund <andres@anarazel.de> wrote
> > On 2017-11-05 01:05:59 +0100, Robert Haas wrote:
> >> skip-gather-project-v1.patch does what it says on the tin.  I still
> >> don't have a test case for this, and I didn't find that it helped very
> >> much,
> 
> I am also wondering in which case it can help and I can't think of the
> case.

I'm confused?  Isn't it fairly obvious that unnecessarily projecting
at the gather node is wasteful? Obviously depending on the query you'll
see smaller / bigger gains, but that there are benefits should be fairly obvious?


> Basically, as part of projection in the gather, I think we are just
> deforming the tuple which we anyway need to perform before sending the
> tuple to the client (printtup) or probably at the upper level of the
> node.

But in most cases you're not going to print millions of tuples, instead
you're going to apply some further operators ontop (e.g. the
OFFSET/LIMIT in my example).

> >> and you said this looked like a big bottleneck in your
> >> testing, so here you go.

> Is it possible that it shows the bottleneck only for 'explain analyze'
> statement as we don't deform the tuple for that at a later stage?

Doesn't matter, there's a OFFSET/LIMIT ontop of the query. Could just as
well be a sort node or something.


> > The query where that showed a big benefit was
> >
> > SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;
> >
> > (i.e a not very selective filter, and then just throwing the results away)
> >
> > still shows quite massive benefits:
> 
> Do you see the benefit if the query is executed without using Explain Analyze?

Yes.

Before:
tpch_5[11878][1]=# SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET 1000000000 LIMIT 1;^[[A
...
Time: 7590.196 ms (00:07.590)

After:
Time: 3862.955 ms (00:03.863)


Greetings,

Andres Freund



Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Wed, Nov 8, 2017 at 1:02 AM, Andres Freund <andres@anarazel.de> wrote:
> Hi,
>
> On 2017-11-06 10:56:43 +0530, Amit Kapila wrote:
>> On Sun, Nov 5, 2017 at 6:54 AM, Andres Freund <andres@anarazel.de> wrote
>> > On 2017-11-05 01:05:59 +0100, Robert Haas wrote:
>> >> skip-gather-project-v1.patch does what it says on the tin.  I still
>> >> don't have a test case for this, and I didn't find that it helped very
>> >> much,
>>
>> I am also wondering in which case it can help and I can't think of the
>> case.
>
> I'm confused?  Isn't it fairly obvious that unnecessarily projecting
> at the gather node is wasteful? Obviously depending on the query you'll
>  see smaller / bigger gains, but that there are benefits should be fairly obvious?
>
>

I agree that there could be benefits depending on the statement.  I
initially thought that we are kind of re-evaluating the expressions in
target list as part of projection even if worker backend has already
done that, but that was not the case and instead, we are deforming the
tuples sent by workers.  Now, I think as a general principle it is a
good idea to delay the deforming as much as possible.

About the patch,

 /*
- * Initialize result tuple type and projection info.
- */
- ExecAssignResultTypeFromTL(&gatherstate->ps);
- ExecAssignProjectionInfo(&gatherstate->ps, NULL);
-

- /*
  * Initialize funnel slot to same tuple descriptor as outer plan.
  */
 if (!ExecContextForcesOids(&gatherstate->ps, &hasoid))
@@ -115,6 +109,12 @@ ExecInitGather(Gather *node, EState *estate, int eflags)
 tupDesc = ExecTypeFromTL(outerNode->targetlist, hasoid);
 ExecSetSlotDescriptor(gatherstate->funnel_slot, tupDesc);

+ /*
+ * Initialize result tuple type and projection info.
+ */
+ ExecAssignResultTypeFromTL(&gatherstate->ps);
+ ExecConditionalAssignProjectionInfo(&gatherstate->ps, tupDesc, OUTER_VAR);
+

This change looks suspicious to me.  I think here we can't use the
tupDesc constructed from targetlist.  One problem, I could see is that
the check for hasOid setting in tlist_matches_tupdesc won't give the
correct answer.   In case of the scan, we use the tuple descriptor
stored in relation descriptor which will allow us to take the right
decision in tlist_matches_tupdesc.  If you try the statement CREATE
TABLE as_select1 AS SELECT * FROM pg_class WHERE relkind = 'r'; in
force_parallel_mode=regress, then you can reproduce the problem I am
trying to highlight.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Thu, Nov 9, 2017 at 12:08 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> This change looks suspicious to me.  I think here we can't use the
> tupDesc constructed from targetlist.  One problem, I could see is that
> the check for hasOid setting in tlist_matches_tupdesc won't give the
> correct answer.   In case of the scan, we use the tuple descriptor
> stored in relation descriptor which will allow us to take the right
> decision in tlist_matches_tupdesc.  If you try the statement CREATE
> TABLE as_select1 AS SELECT * FROM pg_class WHERE relkind = 'r'; in
> force_parallel_mode=regress, then you can reproduce the problem I am
> trying to highlight.

I tried this, but nothing seemed to be obviously broken.  Then I
realized that the CREATE TABLE command wasn't using parallelism, so I
retried with parallel_setup_cost = 0, parallel_tuple_cost = 0, and
min_parallel_table_scan_size = 0.  That got it to use parallel query,
but I still don't see anything broken.  Can you clarify further?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Fri, Nov 10, 2017 at 12:05 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Nov 9, 2017 at 12:08 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> This change looks suspicious to me.  I think here we can't use the
>> tupDesc constructed from targetlist.  One problem, I could see is that
>> the check for hasOid setting in tlist_matches_tupdesc won't give the
>> correct answer.   In case of the scan, we use the tuple descriptor
>> stored in relation descriptor which will allow us to take the right
>> decision in tlist_matches_tupdesc.  If you try the statement CREATE
>> TABLE as_select1 AS SELECT * FROM pg_class WHERE relkind = 'r'; in
>> force_parallel_mode=regress, then you can reproduce the problem I am
>> trying to highlight.
>
> I tried this, but nothing seemed to be obviously broken.  Then I
> realized that the CREATE TABLE command wasn't using parallelism, so I
> retried with parallel_setup_cost = 0, parallel_tuple_cost = 0, and
> min_parallel_table_scan_size = 0.  That got it to use parallel query,
> but I still don't see anything broken.  Can you clarify further?
>

Have you set force_parallel_mode=regress; before running the
statement?  If so, then why do you need to tune other parallel query
related parameters?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Thu, Nov 9, 2017 at 9:31 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Have you set force_parallel_mode=regress; before running the
> statement?

Yes, I tried that first.

> If so, then why you need to tune other parallel query
> related parameters?

Because I couldn't get it to break the other way, I then tried this.

Instead of asking me what I did, can you tell me what I need to do?
Maybe a self-contained reproducible test case including exactly what
goes wrong on your end?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Fri, Nov 10, 2017 at 8:36 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Thu, Nov 9, 2017 at 9:31 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Have you set force_parallel_mode=regress; before running the
>> statement?
>
> Yes, I tried that first.
>
>> If so, then why you need to tune other parallel query
>> related parameters?
>
> Because I couldn't get it to break the other way, I then tried this.
>
> Instead of asking me what I did, can you tell me what I need to do?
> Maybe a self-contained reproducible test case including exactly what
> goes wrong on your end?
>

I think we are missing something very basic because you should see the
failure by executing that statement in force_parallel_mode=regress
even in a freshly created database.  I guess the missing point is that
I am using an assertions-enabled build and probably you are not (if this
is the reason, it should have struck me the first time).  Anyway, here
are the steps to reproduce the issue:

1. initdb
2. start server
3. connect using psql
4. set force_parallel_mode=regress;
5. Create Table as_select1 AS SELECT * FROM pg_class WHERE relkind = 'r';


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Fri, Nov 10, 2017 at 9:48 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Nov 10, 2017 at 8:36 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Thu, Nov 9, 2017 at 9:31 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> Have you set force_parallel_mode=regress; before running the
>>> statement?
>>
>> Yes, I tried that first.
>>
>>> If so, then why you need to tune other parallel query
>>> related parameters?
>>
>> Because I couldn't get it to break the other way, I then tried this.
>>
>> Instead of asking me what I did, can you tell me what I need to do?
>> Maybe a self-contained reproducible test case including exactly what
>> goes wrong on your end?
>>
>
> I think we are missing something very basic because you should see the
> failure by executing that statement in force_parallel_mode=regress
> even in a freshly created database.
>

I am seeing the assertion failure as below on executing the above
mentioned Create statement:

TRAP: FailedAssertion("!(!(tup->t_data->t_infomask & 0x0008))", File:
"heapam.c", Line: 2634)
server closed the connection unexpectedly
This probably means the server terminated abnormally


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Fri, Nov 10, 2017 at 5:44 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I am seeing the assertion failure as below on executing the above
> mentioned Create statement:
>
> TRAP: FailedAssertion("!(!(tup->t_data->t_infomask & 0x0008))", File:
> "heapam.c", Line: 2634)
> server closed the connection unexpectedly
> This probably means the server terminated abnormally

OK, I see it now.  Not sure why I couldn't reproduce this before.

I think the problem is not actually with the code that I just wrote.
What I'm seeing is that the slot descriptor's tdhasoid value is false
for both the funnel slot and the result slot; therefore, we conclude
that no projection is needed to remove the OIDs.  That seems to make
sense: if the funnel slot doesn't have OIDs and the result slot
doesn't have OIDs either, then we don't need to remove them.
Unfortunately, even though the funnel slot descriptor is marked
tdhasoid = false, the tuples being stored there actually do have
OIDs.  And that is because they are coming from the underlying
sequential scan, which *also* has OIDs despite the fact that tdhasoid
for its slot is false.

This had me really confused until I realized that there are two
processes involved.  The problem is that we don't pass eflags down to
the child process -- so in the user backend, everybody agrees that
there shouldn't be OIDs anywhere, because EXEC_FLAG_WITHOUT_OIDS is
set.  In the parallel worker, however, it's not set, so the worker
feels free to do whatever comes naturally, and in this test case that
happens to be returning tuples with OIDs.  Patch for this attached.
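
In rough outline the fix is just to carry the leader's flags over; a
sketch of the shape of it (invented names, not the actual attached
patch; es_top_eflags and EXEC_FLAG_WITHOUT_OIDS are the real executor
pieces involved):

/*
 * Sketch: store the leader's executor flags in the shared memory that
 * describes the parallel query, and have the worker adopt them before it
 * starts executing, so that flags like EXEC_FLAG_WITHOUT_OIDS are honored
 * on both sides.
 */
typedef struct SharedExecutorInfo
{
    int         eflags;                 /* leader's es_top_eflags */
} SharedExecutorInfo;

/* Leader side, while setting up the parallel DSM: */
static void
record_leader_eflags(SharedExecutorInfo *sei, EState *estate)
{
    sei->eflags = estate->es_top_eflags;
}

/* Worker side, before starting its copy of the plan: */
static int
worker_executor_eflags(SharedExecutorInfo *sei)
{
    return sei->eflags;
}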

I also noticed that the code that initializes the funnel slot is using
its own PlanState rather than the outer plan's PlanState to call
ExecContextForcesOids.  I think that's formally incorrect, because the
goal is to end up with a slot that is the same as the outer plan's
slot.  It doesn't matter because ExecContextForcesOids doesn't care
which PlanState it gets passed, but the comments in
ExecContextForcesOids imply that someday it might, so perhaps it's
best to clean that up.  Patch for this attached, too.

And here are the other patches again, too.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Rafia Sabih
Date:
On Fri, Nov 10, 2017 at 8:39 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Nov 10, 2017 at 5:44 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> I am seeing the assertion failure as below on executing the above
>> mentioned Create statement:
>>
>> TRAP: FailedAssertion("!(!(tup->t_data->t_infomask & 0x0008))", File:
>> "heapam.c", Line: 2634)
>> server closed the connection unexpectedly
>> This probably means the server terminated abnormally
>
> OK, I see it now.  Not sure why I couldn't reproduce this before.
>
> I think the problem is not actually with the code that I just wrote.
> What I'm seeing is that the slot descriptor's tdhasoid value is false
> for both the funnel slot and the result slot; therefore, we conclude
> that no projection is needed to remove the OIDs.  That seems to make
> sense: if the funnel slot doesn't have OIDs and the result slot
> doesn't have OIDs either, then we don't need to remove them.
> Unfortunately, even though the funnel slot descriptor is marked
> tdhasoid = false, the tuples being stored there actually do have
> OIDs.  And that is because they are coming from the underlying
> sequential scan, which *also* has OIDs despite the fact that tdhasoid
> for its slot is false.
>
> This had me really confused until I realized that there are two
> processes involved.  The problem is that we don't pass eflags down to
> the child process -- so in the user backend, everybody agrees that
> there shouldn't be OIDs anywhere, because EXEC_FLAG_WITHOUT_OIDS is
> set.  In the parallel worker, however, it's not set, so the worker
> feels free to do whatever comes naturally, and in this test case that
> happens to be returning tuples with OIDs.  Patch for this attached.
>
> I also noticed that the code that initializes the funnel slot is using
> its own PlanState rather than the outer plan's PlanState to call
> ExecContextForcesOids.  I think that's formally incorrect, because the
> goal is to end up with a slot that is the same as the outer plan's
> slot.  It doesn't matter because ExecContextForcesOids doesn't care
> which PlanState it gets passed, but the comments in
> ExecContextForcesOids imply that someday it might, so perhaps it's
> best to clean that up.  Patch for this attached, too.
>
> And here are the other patches again, too.
>
I tested this patch on TPC-H benchmark queries and here are the details.
Setup:
commit: 42de8a0255c2509bf179205e94b9d65f9d6f3cf9
TPC-H scale factor = 20
work_mem = 1GB
max_parallel_workers_per_gather = 4
random_page_cost = seq_page_cost = 0.1

Results:
Case 1: patches applied = skip-project-gather_v1 +
shm-mq-reduce-receiver-latch-set-v1 + shm-mq-less-spinlocks-v2 +
remove-memory-leak-protection-v1
No change in execution time performance for any of the 22 queries.

Case 2: patches applied as in case 1 +
   a) increased PARALLEL_TUPLE_QUEUE_SIZE to 655360
      No significant change in performance in any query
   b) increased PARALLEL_TUPLE_QUEUE_SIZE to 65536 * 50
      Performance improved from 20s to 11s for Q12
   c) increased PARALLEL_TUPLE_QUEUE_SIZE to 6553600
     Q12 shows improvement in performance from 20s to 7s

Case 3: patch applied = faster_gather_v3 as posted at [1]
Q12 shows improvement in performance from 20s to 8s

Please find the attached file for the explain analyse outputs in all
of the aforementioned cases.
I am next working on analysing the effect of these patches on gather
performance in other cases.

[1]  https://www.postgresql.org/message-id/CAOGQiiMOWJwfaegpERkvv3t6tY2CBdnhWHWi1iCfuMsCC98a4g%40mail.gmail.com
-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/

Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Tue, Nov 14, 2017 at 7:31 AM, Rafia Sabih
<rafia.sabih@enterprisedb.com> wrote:
> Case 2: patches applied as in case 1 +
>    a) increased PARALLEL_TUPLE_QUEUE_SIZE to 655360
>       No significant change in performance in any query
>    b) increased PARALLEL_TUPLE_QUEUE_SIZE to 65536 * 50
>       Performance improved from 20s to 11s for Q12
>    c) increased PARALLEL_TUPLE_QUEUE_SIZE to 6553600
>      Q12 shows improvement in performance from 20s to 7s
>
> Case 3: patch applied = faster_gather_v3 as posted at [1]
> Q12 shows improvement in performance from 20s to 8s

I think that we need a little bit deeper analysis here to draw any
firm conclusions.  My own testing showed about a 2x performance
improvement with all 4 patches applied on a query that did a Gather
Merge with many rows.  Now, your testing shows the patches aren't
helping at all.  But what accounts for the difference between your
results and mine?  Without some analysis of that question, this is just
a data point that probably doesn't get us very far.

I suspect that one factor is that many of the queries actually send
very few rows through the Gather.  You didn't send EXPLAIN ANALYZE
outputs for these runs, but I went back and looked at some old tests I
did on a small scale factor and found that, on those tests, Q2, Q6,
Q13, Q14, and Q15 didn't use parallelism at all, while Q1, Q4, Q5, Q7,
Q8, Q9, Q11, Q12, Q19, and Q22 used parallelism, but sent less than
100 rows through Gather.  Obviously, speeding up Gather isn't going to
help at all when only a tiny number of rows are being sent through it.
The remaining seven queries sent the following numbers of rows through
Gather:

3:               ->  Gather Merge  (cost=708490.45..1110533.81 rows=3175456 width=0) (actual time=21932.675..22150.587 rows=118733 loops=1)

10:               ->  Gather Merge  (cost=441168.55..513519.51 rows=574284 width=0) (actual time=15281.370..16319.895 rows=485230 loops=1)

16:                                 ->  Gather  (cost=1000.00..47363.41 rows=297653 width=40) (actual time=0.414..272.808 rows=297261 loops=1)

17:                           ->  Gather  (cost=1000.00..12815.71 rows=2176 width=4) (actual time=2.105..152.966 rows=1943 loops=1)
17:                                 ->  Gather Merge  (cost=2089193.30..3111740.98 rows=7445304 width=0) (actual time=14071.064..33135.996 rows=9973440 loops=1)

18:               ->  Gather Merge  (cost=3271973.63..7013135.71 rows=29992968 width=0) (actual time=81581.450..81581.594 rows=112 loops=1)

20:                                       ->  Gather  (cost=1000.00..13368.31 rows=20202 width=4) (actual time=0.361..19.035 rows=21761 loops=1)

21:                     ->  Gather  (cost=1024178.86..1024179.27 rows=4 width=34) (actual time=12367.266..12377.991 rows=17176 loops=1)

Of those, Q18 is probably uninteresting because it only sends 112
rows, and Q20 and Q16 are probably uninteresting because the Gather
only executed for 19 ms and 272 ms respectively.  Q21 doesn't look
interesting because it ran for 12377.991 ms and only sent 17176
rows - so the bottleneck is probably generating the tuples, not
sending them.  The places where you'd expect the patch set to help are
where a lot of rows are being sent through the Gather or Gather Merge
node very quickly - so with these plans, Q17 is probably the only one
that has a good chance of going faster with these patches, and
maybe Q3 might benefit a bit.

Now obviously your plans are different -- otherwise you couldn't be
seeing a speedup on Q12.  So you have to look at the plans and try to
understand what the big picture is here.  Spending a lot of time
running queries where the time taken by Gather is not the bottleneck
is not a good way to figure out whether we've successfully sped up
Gather.  What would be more useful?  How about:

- Once you've identified the queries where Gather seems like it might
be a bottleneck, run perf without the patch set and see whether Gather
or shm_mq related functions show up high in the profile.  If they do,
run perf with the patch set and see if they become less prominent.

- Try running the test cases that Andres and I tried with and without
the patch set.  See if it helps on those queries.  That will help
verify that your testing procedure is correct, and might also reveal
differences in the effectiveness of that patch set on different
hardware.  You could try this experiment on both PPC and x64, or on
both Linux and MacOS, to see whether CPU architecture and/or operating
system plays a role in the effectiveness of the patch.

I think it's a valid finding that increasing the size of the tuple
queue makes Q12 run faster, but I think that's not because it makes
Gather itself any faster.  Rather, it's because there are fewer
pipeline stalls.  With Gather Merge, whenever a tuple queue becomes
empty, the leader becomes unable to return any more tuples until the
process whose queue is empty generates at least one new tuple.  If
there are multiple workers with non-full queues at the same time then
they can all work on generating tuples in parallel, but if every queue
except one is full, and that queue is empty, then there's nothing to
do but wait for that process.  I suspect that is fairly common with
the plan you're getting for Q12, which I think looks like this:

Limit
-> GroupAggregate
   -> Gather Merge
      -> Nested Loop
         -> Parallel Index Scan
         -> Index Scan

Imagine that you have two workers, and say one of them starts up
slightly faster than the other.  So it fills up its tuple queue with
tuples by executing the nested loop.  Then the queue is full, so it
sleeps.  Now the other worker does the same thing.  Ignoring the
leader for the moment, what will happen next is that all of the tuples
produced by worker #1 are smaller than all of the tuples from worker #2,
so the gather merge will read and return all of the tuples from the
first worker while reading only a single tuple from the second one.
Then it reverses - we read one more tuple from the first worker while
reading and returning all the tuples from the second one.  We're not
reading from the queues evenly, in a way that would keep the workers
busy, but are instead reading long runs of tuples from the same worker
while everybody else waits.  Therefore, we're not really getting any
parallelism at all - for the most part, only one worker runs at a
time.  Here's a fragment of EXPLAIN ANALYZE output from one of your
old emails on this topic[1]:
        ->  Gather Merge  (cost=1001.19..2721491.60 rows=592261 width=27) (actual time=7.806..44794.334 rows=311095 loops=1)
              Workers Planned: 4
              Workers Launched: 4
              ->  Nested Loop  (cost=1.13..2649947.55 rows=148065 width=27) (actual time=0.342..9071.892 rows=62257 loops=5)

You can see that we've got 5 participants here (leader + 4 workers).
Each one spends an average of 9.07 seconds executing the nested loop,
but they take 44.8 seconds to finish the whole thing.  If they ran
completely sequentially it would have taken 45.4 seconds - so there
was only 0.6 seconds of overlapping execution.  If we crank up the
queue size, we will eventually get it large enough that all of the
workers can run the plan to completion without filling up the queue, and
then things will indeed get much faster, but again, not because Gather
is any faster, just because then all workers will be running at the
same time.

In some sense, that's OK: a speedup is a speedup.  However, to get the
maximum speedup with this sort of plan, the queue needs to be big
enough that it never fills up.  How big that is depends on the data
set size.  If we
make the queue 100x bigger based on these test results, and then you
test on a data set that is 10x bigger, you'll come back and recommend
again making it 10x bigger, because it will again produce a huge
performance gain.  On the other hand, if you test a data set that's
only 2x bigger, you'll come back and recommend making the queue 2x
bigger, because that will be good enough.  If you test a data set
that's only half as big as this one, you'll probably find that you
don't need to enlarge the queue 100x -- 50x will be good enough.
There is no size that we can make the queue that will be good enough
in general: somebody can always pick a data set large enough that the
queues fill up, and after that only one worker will run at a time on
this plan-shape.  Contrariwise, somebody can always pick a small
enough data set that a given queue size just wastes memory without
helping performance.

Similarly, I think that faster_gather_v3.patch is effective here
because it lets all the workers run at the same time, not because
Gather gets any faster.  The local queue is 100x bigger than the
shared queue, and that's big enough that the workers never have to
block, so they all run at the same time and things are great.  I don't
see much advantage in pursuing this route.  For the local queue to
make sense it needs to have some advantage that we can't get by just
making the shared queue bigger, which is easier and less code.  The
original idea was that we'd reduce latch traffic and spinlock
contention by moving data from the local queue to the shared queue in
bulk, but the patches I posted attack those problems more directly.

As a general point, I think we need to separate the two goals of (1)
making Gather/Gather Merge faster and (2) reducing Gather Merge
related pipeline stalls.  The patches I posted do (1).  With respect
to (2), I can think of three possible approaches:

1. Make the tuple queues bigger, at least for Gather Merge.  We can't
fix the problem that the data might be too big to let all workers run
to completion before blocking, but we could make it less likely by
allowing for more space, scaled by work_mem or some new GUC (a rough
sketch of such a GUC follows below).

2. Have the planner figure out that this is going to be a problem.   I
kind of wonder how often it really makes sense to feed a Gather Merge
from a Parallel Index Scan, even indirectly.  I wonder if this would
run faster if it didn't use parallelism at all.  If there are enough
intermediate steps between the Parallel Index Scan and the Gather
Merge, then the Gather Merge strategy probably makes sense, but in
general it seems pretty sketchy to break the ordered stream of data
that results from an index scan across many processes and then almost
immediately try to reassemble that stream into sorted order.  That's
kind of lame.

3. Have Parallel Index Scan do a better job distributing the tuples
randomly across the workers.  The problem here happens because, if we
sat and watched which worker produced the next tuple, it wouldn't look
like 1,2,3,4,1,2,3,4,... but rather
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,(many more times),1,1,1,2,2,2,2,2,....
If we could somehow scramble the distribution of tuples to workers so
that this didn't happen, I think it would fix this problem.

Neither (2) nor (3) seem terribly easy to implement so maybe we should
just go with (1), but I feel like that's not a very deep solution to
the problem.
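
To make (1) a bit more concrete, here is a rough sketch of what such a
GUC could look like, assuming a hypothetical integer GUC named
parallel_tuple_queue_size added to guc.c's ConfigureNamesInt table
(with a corresponding int variable declared somewhere appropriate);
this is only meant to show the shape of the change, not a finished
proposal:

	{
		{"parallel_tuple_queue_size", PGC_USERSET, RESOURCES_MEM,
			gettext_noop("Sets the size of each worker's tuple queue "
						 "for Gather and Gather Merge."),
			NULL,
			GUC_UNIT_KB
		},
		&parallel_tuple_queue_size,
		64, 64, MAX_KILOBYTES,
		NULL, NULL, NULL
	},

execParallel.c would then size each per-worker queue from the GUC
(value * 1024) instead of the hard-coded 64kB constant it uses today.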

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

[1] https://www.postgresql.org/message-id/CAOGQiiOAhNPB7Ow8E4r3dAcLB8LEy_t_oznGeB8B2yQbsj7BFA%40mail.gmail.com


Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi,

On 2017-11-15 13:48:18 -0500, Robert Haas wrote:
> I think that we need a little bit deeper analysis here to draw any
> firm conclusions.

Indeed.


> I suspect that one factor is that many of the queries actually send
> very few rows through the Gather.

Yep. I kinda wonder if the same result would still be present if the
benchmarks were run with parallel_leader_participation. The theory
being that what we're seeing is just that the leader doesn't accept
any tuples, and the large queue size just helps because workers can
run for longer.


Greetings,

Andres Freund


Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Thu, Nov 16, 2017 at 12:18 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Tue, Nov 14, 2017 at 7:31 AM, Rafia Sabih
> <rafia.sabih@enterprisedb.com> wrote:
> Similarly, I think that faster_gather_v3.patch is effectively here
> because it lets all the workers run at the same time, not because
> Gather gets any faster.  The local queue is 100x bigger than the
> shared queue, and that's big enough that the workers never have to
> block, so they all run at the same time and things are great.  I don't
> see much advantage in pursuing this route.  For the local queue to
> make sense it needs to have some advantage that we can't get by just
> making the shared queue bigger, which is easier and less code.
>

The main advantage of the local queue idea is that it won't consume any
memory by default for running parallel queries.  It would consume
memory when required and accordingly help in speeding up those cases.
However, increasing the size of the shared queues by default will
increase memory usage for cases where it is not even required.  Even if
we provide a GUC to tune the amount of shared memory, I am not sure how
convenient it will be for the user to use it, as it needs different
values for different workloads and it is not easy to make a general
recommendation.  I am not saying we can't work around this with the
help of a GUC, but it seems like it will be better if we have some
autotune mechanism, and I think Rafia's patch is one way to achieve it.

>  The
> original idea was that we'd reduce latch traffic and spinlock
> contention by moving data from the local queue to the shared queue in
> bulk, but the patches I posted attack those problems more directly.
>

I think the idea was to solve both the problems (shm_mq communication
overhead and Gather Merge related pipeline stalls) with local queue
stuff [1].


[1] - https://www.postgresql.org/message-id/CAA4eK1Jk465W2TTWT4J-RP3RXK2bJWEtYY0xhYpnSc1mcEXfkA%40mail.gmail.com

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Wed, Nov 15, 2017 at 9:34 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> The main advantage of local queue idea is that it won't consume any
> memory by default for running parallel queries.  It would consume
> memory when required and accordingly help in speeding up those cases.
> However, increasing the size of shared queues by default will increase
> memory usage for cases where it is even not required.   Even, if we
> provide a GUC to tune the amount of shared memory, I am not sure how
> convenient it will be for the user to use it as it needs different
> values for different workloads and it is not easy to make a general
> recommendation.  I am not telling we can't work-around this with the
> help of GUC, but it seems like it will be better if we have some
> autotune mechanism and I think Rafia's patch is one way to achieve it.

It's true this might save memory in some cases.  If we never generate
very many tuples, then we won't allocate the local queue and we'll
save memory.  That's mildly nice.

On the other hand, the local queue may also use a bunch of memory
without improving performance, as in the case of Rafia's test where
she raised the queue size 10x and it didn't help.
Alternatively, it may improve performance by a lot, but use more
memory than necessary to do so.  In Rafia's test results, a 100x
larger queue got it down to 7s; if she'd done 200x instead, I don't
think it would have helped further, but it would have been necessary
to go 200x to get the full benefit if the data had been twice as big.

The problem here is that we have no idea how big the queue needs to
be.  The workers will always be happy to generate tuples faster than
the leader can read them, if that's possible, but it will only
sometimes help performance to let them do so.   I think in most cases
we'll end up allocating the local queue - because the workers can
generate faster than the leader can read - but only occasionally will
it make anything faster.

If what we really want to do is allow the workers to get arbitrarily
far ahead of the leader, we could ditch shm_mq altogether here and use
Thomas's shared tuplestore stuff.  Then you never run out of memory
because you spill to disk.  I'm not sure that's the way to go, though.
It still has the problem that you may let the workers get very far
ahead not just when it helps, but also when it's possible but not
helpful.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] [POC] Faster processing at Gather node

From
Rafia Sabih
Date:
On Thu, Nov 16, 2017 at 12:18 AM, Robert Haas <robertmhaas@gmail.com> wrote:

> I suspect that one factor is that many of the queries actually send
> very few rows through the Gather.  You didn't send EXPLAIN ANALYZE
> outputs for these runs, but I went back and looked at some old tests I

Please find the attached zip for the same. The results are for head
and Case 2c. Since there was no difference in plan or in performance
for the other cases except for Q12, I haven't kept the runs for each
of the cases mentioned upthread.

> Now obviously your plans are different -- otherwise you couldn't be
> seeing a speedup on Q12.  So you have to look at the plans and try to
> understand what the big picture is here.  Spending a lot of time
> running queries where the time taken by Gather is not the bottleneck
> is not a good way to figure out whether we've successfully sped up
> Gather.  What would be more useful?  How about:
>
For this scale factor, I found that the queries where Gather or
Gather Merge processes a relatively large number of rows are Q2, Q3,
Q10, Q12, Q16, Q18, Q20, and Q21. However, as per the respective
explain analyse outputs, for all these queries except Q12 the
individual contribution of the Gather/Gather Merge node to the total
execution time of the query is insignificant, so IMO we can't expect
any performance improvement from such cases for this set of patches.
We have already discussed the case of Q12 enough, so there is no need
to say anything more about it here.

> - Once you've identified the queries where Gather seems like it might
> be a bottleneck, run perf without the patch set and see whether Gather
> or shm_mq related functions show up high in the profile.  If they do,
> run perf which the patch set and see if they become less prominent.
>
Sure, I'll do that.

> - Try running the test cases that Andres and I tried with and without
> the patch set.  See if it helps on those queries.  That will help
> verify that your testing procedure is correct, and might also reveal
> differences in the effectiveness of that patch set on different
> hardware.
The only TPC-H query I could find upthread analysed by either you or Andres is,
explain analyze SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET
1000000000 LIMIT 1;

So, here are the results for it with the parameter settings as
suggested by Andres upthread,
    set parallel_setup_cost = 0;
    set parallel_tuple_cost = 0;
    set min_parallel_table_scan_size = 0;
    set max_parallel_workers_per_gather = 8;
with the addition of max_parallel_workers = 100, just to ensure that
it uses as many workers as it planned.

With the patch-set,
explain analyze SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET
1000000000 LIMIT 1;

QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=430530.95..430530.95 rows=1 width=129) (actual
time=57651.076..57651.076 rows=0 loops=1)
   ->  Gather  (cost=0.00..430530.95 rows=116888930 width=129) (actual
time=0.581..50528.386 rows=116988791 loops=1)
         Workers Planned: 8
         Workers Launched: 8
         ->  Parallel Seq Scan on lineitem  (cost=0.00..430530.95
rows=14611116 width=129) (actual time=0.015..3904.101 rows=12998755
loops=9)
               Filter: (l_suppkey > '5012'::bigint)
               Rows Removed by Filter: 333980
 Planning time: 0.143 ms
 Execution time: 57651.722 ms
(9 rows)

on head,
explain analyze SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET
1000000000 LIMIT 1;

QUERY PLAN

-----------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=430530.95..430530.95 rows=1 width=129) (actual
time=100024.995..100024.995 rows=0 loops=1)
   ->  Gather  (cost=0.00..430530.95 rows=116888930 width=129) (actual
time=0.282..93607.947 rows=116988791 loops=1)
         Workers Planned: 8
         Workers Launched: 8
         ->  Parallel Seq Scan on lineitem  (cost=0.00..430530.95
rows=14611116 width=129) (actual time=0.029..3866.321 rows=12998755
loops=9)
               Filter: (l_suppkey > '5012'::bigint)
               Rows Removed by Filter: 333980
 Planning time: 0.409 ms
 Execution time: 100025.303 ms
(9 rows)

So, there is a significant improvement in performance with the
patch-set. The only point that confuses me is that Andres mentioned
upthread,

EXPLAIN ANALYZE SELECT * FROM lineitem WHERE l_suppkey > '5012' OFFSET
1000000000 LIMIT 1;

┌────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│                                                                 QUERY PLAN

├────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
│ Limit  (cost=635802.67..635802.69 rows=1 width=127) (actual
time=5984.916..5984.916 rows=0 loops=1)
│   ->  Gather  (cost=0.00..635802.67 rows=27003243 width=127) (actual
time=0.214..5123.238 rows=26989780 loops=1)
│         Workers Planned: 8
│         Workers Launched: 7
│         ->  Parallel Seq Scan on lineitem  (cost=0.00..635802.67
rows=3375405 width=127) (actual time=0.025..649.887 rows=3373722
loops=8)
│               Filter: (l_suppkey > 5012)
│               Rows Removed by Filter: 376252
│ Planning time: 0.076 ms
│ Execution time: 5986.171 ms

└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
(9 rows)

so there clearly is still benefit (this is scale 100, but that shouldn't
make much of a difference).

In my tests the scale factor is 20 and the number of rows passing
through the Gather is 116988791, whereas for Andres it is 26989780;
also, the time taken by the query at scale factor 20 is some 100s
without the patch, while for Andres it is some 8s. So maybe when
Andres wrote scale 100 it was a typo for scale 10, or what he meant by
scale is not the TPC-H scale factor; in that case I'd like to know
what he meant there.

-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/

Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Ants Aasma
Date:
On Thu, Nov 16, 2017 at 6:42 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> The problem here is that we have no idea how big the queue needs to
> be.  The workers will always be happy to generate tuples faster than
> the leader can read them, if that's possible, but it will only
> sometimes help performance to let them do so.   I think in most cases
> we'll end up allocating the local queue - because the workers can
> generate faster than the leader can read - but only occasionally will
> it make anything faster.

For the Gather Merge driven by Parallel Index Scan case it seems to me
that the correct queue size is one that can store two index pages
worth of tuples. Additional space will always help buffer any
performance variations, but there should be a step function somewhere
around 1+1/n_workers pages. I wonder if the queue could be dynamically
sized based on the driving scan. With some limits of course as parent
nodes to the parallel index scan can increase the row count by
arbitrary amounts.
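
To put rough numbers on that (these figures are invented, purely for
illustration): if a btree leaf page yielded about 200 tuples and each
emitted tuple were about 100 bytes, then with 4 workers the suggested
size would be roughly

    (1 + 1/4) pages * 200 tuples/page * 100 bytes ~= 25kB

per worker queue.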

Regards,
Ants Aasma
--
Cybertec Schönig & Schönig GmbH
Gröhrmühlgasse 26, A-2700 Wiener Neustadt
Web: http://www.postgresql-support.de, http://www.cybertec.at


Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Thu, Nov 16, 2017 at 10:23 AM, Ants Aasma <ants.aasma@eesti.ee> wrote:
> For the Gather Merge driven by Parallel Index Scan case it seems to me
> that the correct queue size is one that can store two index pages
> worth of tuples. Additional space will always help buffer any
> performance variations, but there should be a step function somewhere
> around 1+1/n_workers pages. I wonder if the queue could be dynamically
> sized based on the driving scan. With some limits of course as parent
> nodes to the parallel index scan can increase the row count by
> arbitrary amounts.

Currently, Gather Merge can store 100 tuples + as much more stuff as
fits in a 64kB queue.  That should already be more than 2 index pages,
I would think, although admittedly I haven't tested.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Fri, Nov 10, 2017 at 8:39 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Fri, Nov 10, 2017 at 5:44 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> I am seeing the assertion failure as below on executing the above
>> mentioned Create statement:
>>
>> TRAP: FailedAssertion("!(!(tup->t_data->t_infomask & 0x0008))", File:
>> "heapam.c", Line: 2634)
>> server closed the connection unexpectedly
>> This probably means the server terminated abnormally
>
> OK, I see it now.  Not sure why I couldn't reproduce this before.
>
> I think the problem is not actually with the code that I just wrote.
> What I'm seeing is that the slot descriptor's tdhasoid value is false
> for both the funnel slot and the result slot; therefore, we conclude
> that no projection is needed to remove the OIDs.  That seems to make
> sense: if the funnel slot doesn't have OIDs and the result slot
> doesn't have OIDs either, then we don't need to remove them.
> Unfortunately, even though the funnel slot descriptor is marked
> tdhasoid = false, the tuples being stored there actually do have
> OIDs.  And that is because they are coming from the underlying
> sequential scan, which *also* has OIDs despite the fact that tdhasoid
> for its slot is false.
>
> This had me really confused until I realized that there are two
> processes involved.  The problem is that we don't pass eflags down to
> the child process -- so in the user backend, everybody agrees that
> there shouldn't be OIDs anywhere, because EXEC_FLAG_WITHOUT_OIDS is
> set.  In the parallel worker, however, it's not set, so the worker
> feels free to do whatever comes naturally, and in this test case that
> happens to be returning tuples with OIDs.  Patch for this attached.
>
> I also noticed that the code that initializes the funnel slot is using
> its own PlanState rather than the outer plan's PlanState to call
> ExecContextForcesOids.  I think that's formally incorrect, because the
> goal is to end up with a slot that is the same as the outer plan's
> slot.  It doesn't matter because ExecContextForcesOids doesn't care
> which PlanState it gets passed, but the comments in
> ExecContextForcesOids imply that someday it might, so perhaps it's
> best to clean that up.  Patch for this attached, too.
>

- if (!ExecContextForcesOids(&gatherstate->ps, &hasoid))
+ if (!ExecContextForcesOids(outerPlanState(gatherstate), &hasoid))
  hasoid = false;

Don't we need a similar change in nodeGatherMerge.c (in function
ExecInitGatherMerge)?
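
Something like the following, I think, assuming the planstate variable
there is gm_state (I'm sketching this from memory, so please
double-check against the actual code):

- if (!ExecContextForcesOids(&gm_state->ps, &hasoid))
+ if (!ExecContextForcesOids(outerPlanState(gm_state), &hasoid))
  hasoid = false;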

> And here are the other patches again, too.
>

The 0001* patch doesn't apply, please find the attached rebased
version which I have used to verify the patch.

Now, along with 0001* and 0002*, 0003-skip-gather-project-v2 looks
good to me.  I think we can proceed with the commit of 0001*~0003*
patches unless somebody else has any comments.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Sat, Nov 18, 2017 at 7:23 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> On Fri, Nov 10, 2017 at 8:39 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> On Fri, Nov 10, 2017 at 5:44 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> I am seeing the assertion failure as below on executing the above
>>> mentioned Create statement:
>>
>
> - if (!ExecContextForcesOids(&gatherstate->ps, &hasoid))
> + if (!ExecContextForcesOids(outerPlanState(gatherstate), &hasoid))
>   hasoid = false;
>
> Don't we need a similar change in nodeGatherMerge.c (in function
> ExecInitGatherMerge)?
>
>> And here are the other patches again, too.
>>
>
> The 0001* patch doesn't apply, please find the attached rebased
> version which I have used to verify the patch.
>
> Now, along with 0001* and 0002*, 0003-skip-gather-project-v2 looks
> good to me.  I think we can proceed with the commit of 0001*~0003*
> patches unless somebody else has any comments.
>

I see that you have committed 0001* and 0002* patches, so continuing my review.

Review of 0006-remove-memory-leak-protection-v1

> remove-memory-leak-protection-v1.patch removes the memory leak
> protection that Tom installed upon discovering that the original
> version of tqueue.c leaked memory like crazy.  I think that it
> shouldn't do that any more, courtesy of
> 6b65a7fe62e129d5c2b85cd74d6a91d8f7564608.  Assuming that's correct, we
> can avoid a whole lot of tuple copying in Gather Merge and a much more
> modest amount of overhead in Gather.  Since my test case exercised
> Gather Merge, this bought ~400 ms or so.

I think Tom didn't install the memory-leak protection in
nodeGatherMerge.c related to the additional copy of the tuple.  I could
see it is present in the original commit of Gather Merge
(355d3993c53ed62c5b53d020648e4fbcfbf5f155).  Tom just avoided applying
heap_copytuple to a null tuple in his commit
04e9678614ec64ad9043174ac99a25b1dc45233a.  I am not sure whether you
are just referring to the protection Tom added in nodeGather.c; if
so, it is not clear from your mail.

I think we don't need the additional copy of the tuple in
nodeGatherMerge.c and your patch seems to be doing the right thing.
However, after your changes, it looks slightly odd that
gather_merge_clear_tuples() is explicitly calling heap_freetuple for
the tuples allocated by tqueue.c, maybe we can add some comment.  It
was much clear before this patch as nodeGatherMerge.c was explicitly
copying the tuples that it is freeing.

Isn't it better to explicitly call gather_merge_clear_tuples in
ExecEndGatherMerge so that we can free the memory for tuples allocated
in this node rather than relying on reset/free of MemoryContext in
which those tuples are allocated?

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] [POC] Faster processing at Gather node

From
Rafia Sabih
Date:
On Thu, Nov 16, 2017 at 12:24 AM, Andres Freund <andres@anarazel.de> wrote:
> Hi,
>
> On 2017-11-15 13:48:18 -0500, Robert Haas wrote:
>> I think that we need a little bit deeper analysis here to draw any
>> firm conclusions.
>
> Indeed.
>
>
>> I suspect that one factor is that many of the queries actually send
>> very few rows through the Gather.
>
> Yep. I kinda wonder if the same result would present if the benchmarks
> were run with parallel_leader_participation. The theory being what were
> seing is just that the leader doesn't accept any tuples, and the large
> queue size just helps because workers can run for longer.
>
I ran Q12 with parallel_leader_participation = off and could not get
any performance improvement with the patches given by Robert.  The
result was the same for head as well.  The query plan also remains
unaffected by the value of this parameter.

Here are the details of the experiment,
TPC-H scale factor = 20,
work_mem = 1GB
random_page_cost = seq_page_cost = 0.1
max_parallel_workers_per_gather = 4

PG commit: 745948422c799c1b9f976ee30f21a7aac050e0f3

Please find the attached file for the explain analyse output for
either values of parallel_leader_participation and patches.
-- 
Regards,
Rafia Sabih
EnterpriseDB: http://www.enterprisedb.com/

Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Wed, Nov 22, 2017 at 8:36 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> remove-memory-leak-protection-v1.patch removes the memory leak
>> protection that Tom installed upon discovering that the original
>> version of tqueue.c leaked memory like crazy.  I think that it
>> shouldn't do that any more, courtesy of
>> 6b65a7fe62e129d5c2b85cd74d6a91d8f7564608.  Assuming that's correct, we
>> can avoid a whole lot of tuple copying in Gather Merge and a much more
>> modest amount of overhead in Gather.  Since my test case exercised
>> Gather Merge, this bought ~400 ms or so.
>
> I think Tom didn't installed memory protection in nodeGatherMerge.c
> related to an additional copy of tuple.  I could see it is present in
> the original commit of Gather Merge
> (355d3993c53ed62c5b53d020648e4fbcfbf5f155).  Tom just avoided applying
> heap_copytuple to a null tuple in his commit
> 04e9678614ec64ad9043174ac99a25b1dc45233a.  I am not sure whether you
> are just referring to the protection Tom added in nodeGather.c,  If
> so, it is not clear from your mail.

Yes, that's what I mean.  What got done for Gather Merge was motivated
by what Tom did for Gather.  Sorry for not expressing the idea more
precisely.

> I think we don't need the additional copy of tuple in
> nodeGatherMerge.c and your patch seem to be doing the right thing.
> However, after your changes, it looks slightly odd that
> gather_merge_clear_tuples() is explicitly calling heap_freetuple for
> the tuples allocated by tqueue.c, maybe we can add some comment.  It
> was much clear before this patch as nodeGatherMerge.c was explicitly
> copying the tuples that it is freeing.

OK, I can add a comment about that.

> Isn't it better to explicitly call gather_merge_clear_tuples in
> ExecEndGatherMerge so that we can free the memory for tuples allocated
> in this node rather than relying on reset/free of MemoryContext in
> which those tuples are allocated?

Generally relying on reset/free of a MemoryContext is cheaper.
Typically you only want to free manually if the freeing would
otherwise not happen soon enough.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Sat, Nov 25, 2017 at 9:13 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Wed, Nov 22, 2017 at 8:36 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>>> remove-memory-leak-protection-v1.patch removes the memory leak
>>> protection that Tom installed upon discovering that the original
>>> version of tqueue.c leaked memory like crazy.  I think that it
>>> shouldn't do that any more, courtesy of
>>> 6b65a7fe62e129d5c2b85cd74d6a91d8f7564608.  Assuming that's correct, we
>>> can avoid a whole lot of tuple copying in Gather Merge and a much more
>>> modest amount of overhead in Gather.  Since my test case exercised
>>> Gather Merge, this bought ~400 ms or so.
>>
>> I think Tom didn't installed memory protection in nodeGatherMerge.c
>> related to an additional copy of tuple.  I could see it is present in
>> the original commit of Gather Merge
>> (355d3993c53ed62c5b53d020648e4fbcfbf5f155).  Tom just avoided applying
>> heap_copytuple to a null tuple in his commit
>> 04e9678614ec64ad9043174ac99a25b1dc45233a.  I am not sure whether you
>> are just referring to the protection Tom added in nodeGather.c,  If
>> so, it is not clear from your mail.
>
> Yes, that's what I mean.  What got done for Gather Merge was motivated
> by what Tom did for Gather.  Sorry for not expressing the idea more
> precisely.
>
>> I think we don't need the additional copy of tuple in
>> nodeGatherMerge.c and your patch seem to be doing the right thing.
>> However, after your changes, it looks slightly odd that
>> gather_merge_clear_tuples() is explicitly calling heap_freetuple for
>> the tuples allocated by tqueue.c, maybe we can add some comment.  It
>> was much clear before this patch as nodeGatherMerge.c was explicitly
>> copying the tuples that it is freeing.
>
> OK, I can add a comment about that.
>

Sure, I think apart from that the patch looks good to me.  I think a
good test of this patch could be to try to pass many tuples via gather
merge and see if there is any leak in memory.  Do you have any other
ideas?

>> Isn't it better to explicitly call gather_merge_clear_tuples in
>> ExecEndGatherMerge so that we can free the memory for tuples allocated
>> in this node rather than relying on reset/free of MemoryContext in
>> which those tuples are allocated?
>
> Generally relying on reset/free of a MemoryContext is cheaper.
> Typically you only want to free manually if the freeing would
> otherwise not happen soon enough.
>

Yeah and I think something like that can happen after your patch
because now the memory for tuples returned via TupleQueueReaderNext
will be allocated in ExecutorState and that can last for long.   I
think it is better to free memory, but we can leave it as well if you
don't feel it important.  In any case, I have written a patch, see if
you think it makes sense.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Michael Paquier
Date:
On Sun, Nov 26, 2017 at 5:15 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Yeah and I think something like that can happen after your patch
> because now the memory for tuples returned via TupleQueueReaderNext
> will be allocated in ExecutorState and that can last for long.   I
> think it is better to free memory, but we can leave it as well if you
> don't feel it important.  In any case, I have written a patch, see if
> you think it makes sense.

OK. I can see some fresh and unreviewed patches so moved to next CF.
-- 
Michael


Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Sun, Nov 26, 2017 at 3:15 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> Yeah and I think something like that can happen after your patch
> because now the memory for tuples returned via TupleQueueReaderNext
> will be allocated in ExecutorState and that can last for long.   I
> think it is better to free memory, but we can leave it as well if you
> don't feel it important.  In any case, I have written a patch, see if
> you think it makes sense.

Well, I don't really know.  My intuition is that in most cases after
ExecShutdownGatherMergeWorkers() we will very shortly thereafter call
ExecutorEnd() and everything will go away.  Maybe that's wrong, but
Tom put that call where it is in
2d44c58c79aeef2d376be0141057afbb9ec6b5bc, and he could have put it
inside ExecShutdownGatherMergeWorkers() instead.  Now maybe he didn't
consider that approach, but Tom is usually smart about stuff like
that...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] [POC] Faster processing at Gather node

From
Amit Kapila
Date:
On Fri, Dec 1, 2017 at 8:04 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sun, Nov 26, 2017 at 3:15 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> Yeah and I think something like that can happen after your patch
>> because now the memory for tuples returned via TupleQueueReaderNext
>> will be allocated in ExecutorState and that can last for long.   I
>> think it is better to free memory, but we can leave it as well if you
>> don't feel it important.  In any case, I have written a patch, see if
>> you think it makes sense.
>
> Well, I don't really know.  My intuition is that in most cases after
> ExecShutdownGatherMergeWorkers() we will very shortly thereafter call
> ExecutorEnd() and everything will go away.
>

I thought there are some cases (though fewer) where we want to shut down
the nodes (ExecShutdownNode) earlier and release the resources sooner.
However, if you are not completely sure about this change, then we can
leave it as it is.  Thanks for sharing your thoughts.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Sun, Dec 3, 2017 at 10:30 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I thought there are some cases (though less) where we want to Shutdown
> the nodes (ExecShutdownNode) earlier and release the resources sooner.
> However, if you are not completely sure about this change, then we can
> leave it as it.  Thanks for sharing your thoughts.

OK, thanks.  I committed that patch, after first running 100 million
tuples through a Gather over and over again to test for leaks.
Hopefully I haven't missed anything here, but it looks like it's
solid.  Here once again are the remaining patches.  While the
already-committed patches are nice, these two are the ones that
actually produced big improvements in my testing, so it would be good
to move them along.  Any reviews appreciated.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Rafia Sabih
Date:


On Mon, Dec 4, 2017 at 9:20 PM, Robert Haas <robertmhaas@gmail.com> wrote:
On Sun, Dec 3, 2017 at 10:30 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> I thought there are some cases (though less) where we want to Shutdown
> the nodes (ExecShutdownNode) earlier and release the resources sooner.
> However, if you are not completely sure about this change, then we can
> leave it as it.  Thanks for sharing your thoughts.

OK, thanks.  I committed that patch, after first running 100 million
tuples through a Gather over and over again to test for leaks.
Hopefully I haven't missed anything here, but it looks like it's
solid.  Here once again are the remaining patches.  While the
already-committed patches are nice, these two are the ones that

Hi,
I spent some time verifying this memory-leak patch for the gather-merge
case and I too found it good. In the query I tried, around 10 million
tuples were passed through Gather Merge. Analysing the output of top,
the memory usage looks acceptable and it gets freed once the query is
completed. Since I was trying on my local system only, I tested with up
to 8 workers and didn't find any memory leaks for the queries I tried.
One may find the attached file for the test-case.

--
Regards,
Rafia Sabih
Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi,

On 2017-12-04 10:50:53 -0500, Robert Haas wrote:
> Subject: [PATCH 1/2] shm-mq-less-spinlocks-v2


> + * mq_sender and mq_bytes_written can only be changed by the sender.
> + * mq_receiver and mq_sender are protected by mq_mutex, although, importantly,
> + * they cannot change once set, and thus may be read without a lock once this
> + * is known to be the case.

I don't recall our conversation around this anymore, and haven't read
down far enough to see the relevant code. Lest I forget: Such constructs
often need careful use of barriers.


> - * mq_detached can be set by either the sender or the receiver, so the mutex
> - * must be held to read or write it.  Memory barriers could be used here as
> - * well, if needed.
> + * mq_bytes_read and mq_bytes_written are not protected by the mutex.  Instead,
> + * they are written atomically using 8 byte loads and stores.  Memory barriers
> + * must be carefully used to synchronize reads and writes of these values with
> + * reads and writes of the actual data in mq_ring.
> + *
> + * mq_detached needs no locking.  It can be set by either the sender or the
> + * receiver, but only ever from false to true, so redundant writes don't
> + * matter.  It is important that if we set mq_detached and then set the
> + * counterparty's latch, the counterparty must be certain to see the change
> + * after waking up.  Since SetLatch begins with a memory barrier and ResetLatch
> + * ends with one, this should be OK.

s/should/is/ or similar?


Perhaps a short benchmark for 32bit systems using shm_mq wouldn't hurt?
I suspect there won't be much of a performance impact, but it's probably
worth checking.


>   * mq_ring_size and mq_ring_offset never change after initialization, and
>   * can therefore be read without the lock.
>   *
> - * Importantly, mq_ring can be safely read and written without a lock.  Were
> - * this not the case, we'd have to hold the spinlock for much longer
> - * intervals, and performance might suffer.  Fortunately, that's not
> - * necessary.  At any given time, the difference between mq_bytes_read and
> + * At any given time, the difference between mq_bytes_read and

Hm, why did you remove the first part about mq_ring itself?


> @@ -848,18 +868,19 @@ shm_mq_send_bytes(shm_mq_handle *mqh, Size nbytes, const void *data,
>  
>      while (sent < nbytes)
>      {
> -        bool        detached;
>          uint64        rb;
> +        uint64        wb;
>  
>          /* Compute number of ring buffer bytes used and available. */
> -        rb = shm_mq_get_bytes_read(mq, &detached);
> -        Assert(mq->mq_bytes_written >= rb);
> -        used = mq->mq_bytes_written - rb;
> +        rb = pg_atomic_read_u64(&mq->mq_bytes_read);
> +        wb = pg_atomic_read_u64(&mq->mq_bytes_written);
> +        Assert(wb >= rb);
> +        used = wb - rb;

Just to make sure my understanding is correct: No barriers needed here
because "bytes_written" is only written by the sending backend, and
"bytes_read" cannot lap it. Correct?


>          Assert(used <= ringsize);
>          available = Min(ringsize - used, nbytes - sent);
>  
>          /* Bail out if the queue has been detached. */
> -        if (detached)
> +        if (mq->mq_detached)

Hm, do all paths here guarantee that mq->mq_detached won't be stored on
the stack / register in the first iteration, and then not reread any
further? I think it's fine because every branch of the if below ends up
in a syscall / barrier, but it might be worth noting on that here.


> +            /*
> +             * Since mq->mqh_counterparty_attached is known to be true at this
> +             * point, mq_receiver has been set, and it can't change once set.
> +             * Therefore, we can read it without acquiring the spinlock.
> +             */
> +            Assert(mqh->mqh_counterparty_attached);
> +            SetLatch(&mq->mq_receiver->procLatch);

Perhaps mention that this could lead to spuriously signalling the wrong
backend in case of detach, but that that is fine?

>              /* Skip manipulation of our latch if nowait = true. */
>              if (nowait)
> @@ -934,10 +953,18 @@ shm_mq_send_bytes(shm_mq_handle *mqh, Size nbytes, const void *data,
>          }
>          else
>          {
> -            Size        offset = mq->mq_bytes_written % (uint64) ringsize;
> -            Size        sendnow = Min(available, ringsize - offset);
> +            Size        offset;
> +            Size        sendnow;
> +
> +            offset = wb % (uint64) ringsize;
> +            sendnow = Min(available, ringsize - offset);

I know the scheme isn't new, but I do find it not immediately obvious
that 'wb' is short for 'bytes_written'.


> -            /* Write as much data as we can via a single memcpy(). */
> +            /*
> +             * Write as much data as we can via a single memcpy(). Make sure
> +             * these writes happen after the read of mq_bytes_read, above.
> +             * This barrier pairs with the one in shm_mq_inc_bytes_read.
> +             */

s/above/above. Otherwise a newer mq_bytes_read could become visible
before the corresponding reads have fully finished./?

Could you also add a comment as to why you think a read barrier isn't
sufficient? IIUC that's the case because we need to prevent reordering
in both directions: we can neither start reading based on a "too new"
bytes_read, nor afford writes to mq_ring being reordered to before
the barrier. Correct?


> +            pg_memory_barrier();
>              memcpy(&mq->mq_ring[mq->mq_ring_offset + offset],
>                     (char *) data + sent, sendnow);
>              sent += sendnow;

Btw, this mq_ring_offset stuff seems a bit silly, why don't we use
proper padding/union in the struct to make it unnecessary to do that bit
of offset calculation every time? I think it currently prevents
efficient address calculation instructions from being used.


> From 666d33a363036a647dde83cb28b9d7ad0b31f76c Mon Sep 17 00:00:00 2001
> From: Robert Haas <rhaas@postgresql.org>
> Date: Sat, 4 Nov 2017 19:03:03 +0100
> Subject: [PATCH 2/2] shm-mq-reduce-receiver-latch-set-v1

> -    /* Consume any zero-copy data from previous receive operation. */
> -    if (mqh->mqh_consume_pending > 0)
> +    /*
> +     * If we've consumed an amount of data greater than 1/4th of the ring
> +     * size, mark it consumed in shared memory.  We try to avoid doing this
> +     * unnecessarily when only a small amount of data has been consumed,
> +     * because SetLatch() is fairly expensive and we don't want to do it
> +     * too often.
> +     */
> +    if (mqh->mqh_consume_pending > mq->mq_ring_size / 4)
>      {

Hm. Why are we doing this at the level of updating the variables, rather
than SetLatch calls?

Greetings,

Andres Freund


Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Tue, Jan 9, 2018 at 7:09 PM, Andres Freund <andres@anarazel.de> wrote:
>> + * mq_sender and mq_bytes_written can only be changed by the sender.
>> + * mq_receiver and mq_sender are protected by mq_mutex, although, importantly,
>> + * they cannot change once set, and thus may be read without a lock once this
>> + * is known to be the case.
>
> I don't recall our conversation around this anymore, and haven't read
> down far enough to see the relevant code. Lest I forget: Such construct
> often need careful use of barriers.

I think the only thing the code assumes here is that if we previously
read the value with the spinlock and didn't get NULL, we can later
read the value without the spinlock and count on seeing the same value
we saw previously.  I think that's safe enough.

> s/should/is/ or similar?

I prefer it the way that I have it.

> Perhaps a short benchmark for 32bit systems using shm_mq wouldn't hurt?
> I suspect there won't be much of a performance impact, but it's probably
> worth checking.

I don't think I understand your concern here.  If this is used on a
system where we're emulating atomics and barriers in painful ways, it
might hurt performance, but I think we have a policy of not caring.

Also, I don't know where I'd find a 32-bit system to test.

>> - * Importantly, mq_ring can be safely read and written without a lock.  Were
>> - * this not the case, we'd have to hold the spinlock for much longer
>> - * intervals, and performance might suffer.  Fortunately, that's not
>> - * necessary.  At any given time, the difference between mq_bytes_read and
>> + * At any given time, the difference between mq_bytes_read and
>
> Hm, why did you remove the first part about mq_ring itself?

Bad editing.  Restored.

>> @@ -848,18 +868,19 @@ shm_mq_send_bytes(shm_mq_handle *mqh, Size nbytes, const void *data,
>>
>>       while (sent < nbytes)
>>       {
>> -             bool            detached;
>>               uint64          rb;
>> +             uint64          wb;
>>
>>               /* Compute number of ring buffer bytes used and available. */
>> -             rb = shm_mq_get_bytes_read(mq, &detached);
>> -             Assert(mq->mq_bytes_written >= rb);
>> -             used = mq->mq_bytes_written - rb;
>> +             rb = pg_atomic_read_u64(&mq->mq_bytes_read);
>> +             wb = pg_atomic_read_u64(&mq->mq_bytes_written);
>> +             Assert(wb >= rb);
>> +             used = wb - rb;
>
> Just to make sure my understanding is correct: No barriers needed here
> because "bytes_written" is only written by the sending backend, and
> "bytes_read" cannot lap it. Correct?

We can't possibly read a stale value of mq_bytes_written because we
are the only process that can write it.  It's possible that the
receiver has just increased mq_bytes_read and that the change isn't
visible to us yet, but if so, the receiver's also going to set our
latch, or has done so already.  So the worst thing that happens is
that we decide to sleep because it looks like no space is available
and almost immediately get woken up because there really is space.

>>               Assert(used <= ringsize);
>>               available = Min(ringsize - used, nbytes - sent);
>>
>>               /* Bail out if the queue has been detached. */
>> -             if (detached)
>> +             if (mq->mq_detached)
>
> Hm, do all paths here guarantee that mq->mq_detached won't be stored on
> the stack / register in the first iteration, and then not reread any
> further? I think it's fine because every branch of the if below ends up
> in a syscall / barrier, but it might be worth noting on that here.

Aargh.  I hate compilers.  I added a comment.  Should I think about
changing mq_detached to pg_atomic_uint32 instead?

> Perhaps mention that this could lead to spuriously signalling the wrong
> backend in case of detach, but that that is fine?

I think that's a general risk of latches that doesn't need to be
specifically recapitulated here.

> I know the scheme isn't new, but I do find it not immediately obvious
> that 'wb' is short for 'bytes_written'.

Sorry.

>> -                     /* Write as much data as we can via a single memcpy(). */
>> +                     /*
>> +                      * Write as much data as we can via a single memcpy(). Make sure
>> +                      * these writes happen after the read of mq_bytes_read, above.
>> +                      * This barrier pairs with the one in shm_mq_inc_bytes_read.
>> +                      */
>
> s/above/above. Otherwise a newer mq_bytes_read could become visible
> before the corresponding reads have fully finished./?

I don't find that very clear.  A newer mq_bytes_read could become
visible whenever, and a barrier doesn't prevent that from happening.
What it does is ensure (together with the one in
shm_mq_inc_bytes_read) that we don't try to read bytes that aren't
fully *written* yet.

Generally, my mental model is that barriers make things happen in
program order rather than some other order that the CPU thinks would
be fun.  Imagine a single-core server doing all of this stuff the "old
school" way.  If the reader puts data into the queue before
advertising its presence and the writer finishes using the data from
the queue before advertising its consumption, then everything works.
If you do anything else, it's flat busted, even on that single-core
system, because a context switch could happen at any time, and then
you might read data that isn't written yet.  The barrier just ensures
that we get that order of execution even on fancy modern hardware, but
I'm not sure how much of that we really need to explain here.
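
To spell out the ordering I have in mind - this is just an
illustration of how the barriers pair up, not the exact shm_mq code,
and the variable names are borrowed loosely from the patch:

    /* sender: fill the ring, then advertise the new write count */
    wb = pg_atomic_read_u64(&mq->mq_bytes_written); /* only we write this */
    rb = pg_atomic_read_u64(&mq->mq_bytes_read);
    pg_memory_barrier();    /* don't overwrite bytes not yet read */
    memcpy(&mq->mq_ring[offset], data, sendnow);
    pg_write_barrier();     /* data must be visible before the count */
    pg_atomic_write_u64(&mq->mq_bytes_written, wb + sendnow);

    /* receiver: trust only bytes covered by the advertised count */
    rb = pg_atomic_read_u64(&mq->mq_bytes_read);    /* only we write this */
    wb = pg_atomic_read_u64(&mq->mq_bytes_written);
    pg_read_barrier();      /* read the count before the data it covers */
    memcpy(dest, &mq->mq_ring[offset], nbytes);
    pg_memory_barrier();    /* finish reading before releasing the space */
    pg_atomic_write_u64(&mq->mq_bytes_read, rb + nbytes);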

> Could you also add a comment as to why you think a read barrier isn't
> sufficient? IIUC that's the case because we need to prevent reordering
> in both directions: Can't neither start reading based on a "too new"
> bytes_read, nor can affort writes to mq_ring being reordered to before
> the barrier. Correct?

I can't parse that statement.  We're separating the read of
mq_bytes_read from the write to mq_ring.  My understanding is that a
read barrier can separate two reads, a write barrier can separate two
writes, and a full barrier is needed to separate a write from a read
in either order.  Added a comment to that effect.

>> +                     pg_memory_barrier();
>>                       memcpy(&mq->mq_ring[mq->mq_ring_offset + offset],
>>                                  (char *) data + sent, sendnow);
>>                       sent += sendnow;
>
> Btw, this mq_ring_offset stuff seems a bit silly, why don't we use
> proper padding/union in the struct to make it unnecessary to do that bit
> of offset calculation every time? I think it currently prevents
> efficient address calculation instructions from being used.

Well, the root cause -- aside from me being a fallible human being
with only limited programming skills -- is that I wanted the parallel
query code to be able to request whatever queue size it preferred
without having to worry about how many bytes of that space was going
to get consumed by overhead.  But it would certainly be possible to
change it up, if somebody felt like working out how the API should be
set up.  I don't really want to do that right now, though.

>> From 666d33a363036a647dde83cb28b9d7ad0b31f76c Mon Sep 17 00:00:00 2001
>> From: Robert Haas <rhaas@postgresql.org>
>> Date: Sat, 4 Nov 2017 19:03:03 +0100
>> Subject: [PATCH 2/2] shm-mq-reduce-receiver-latch-set-v1
>
>> -     /* Consume any zero-copy data from previous receive operation. */
>> -     if (mqh->mqh_consume_pending > 0)
>> +     /*
>> +      * If we've consumed an amount of data greater than 1/4th of the ring
>> +      * size, mark it consumed in shared memory.  We try to avoid doing this
>> +      * unnecessarily when only a small amount of data has been consumed,
>> +      * because SetLatch() is fairly expensive and we don't want to do it
>> +      * too often.
>> +      */
>> +     if (mqh->mqh_consume_pending > mq->mq_ring_size / 4)
>>       {
>
> Hm. Why are we doing this at the level of updating the variables, rather
> than SetLatch calls?

Hmm, I'm not sure I understand what you're suggesting, here.  In
general, we return with the data for the current message unconsumed,
and then consume it the next time the function is called, so that
(except when the message wraps the end of the buffer) we can return a
pointer directly into the buffer rather than having to memcpy().  What
this patch does is postpone consuming the data further, either until
we can free up at least a quarter of the ring buffer or until we need
to wait for more data. It seemed worthwhile to free up space in the
ring buffer occasionally even if we weren't to the point of waiting
yet, so that the sender has an opportunity to write new data into that
space if it happens to still be running.

Slightly revised patches attached.  0002 is unchanged except for being
made pgindent-clean.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi,

On 2018-01-25 12:09:23 -0500, Robert Haas wrote:
> > Perhaps a short benchmark for 32bit systems using shm_mq wouldn't hurt?
> > I suspect there won't be much of a performance impact, but it's probably
> > worth checking.
>
> I don't think I understand your concern here.  If this is used on a
> system where we're emulating atomics and barriers in painful ways, it
> might hurt performance, but I think we have a policy of not caring.

Well, it's more than just systems like that - for 64bit atomics we
sometimes do fall back to spinlock based atomics on 32bit systems, even
if they support 32 bit atomics.


> Also, I don't know where I'd find a 32-bit system to test.

You can compile with -m32 on reasonable systems ;)


> >>               Assert(used <= ringsize);
> >>               available = Min(ringsize - used, nbytes - sent);
> >>
> >>               /* Bail out if the queue has been detached. */
> >> -             if (detached)
> >> +             if (mq->mq_detached)
> >
> > Hm, do all paths here guarantee that mq->mq_detached won't be stored on
> > the stack / register in the first iteration, and then not reread any
> > further? I think it's fine because every branch of the if below ends up
> > in a syscall / barrier, but it might be worth noting on that here.
>
> Aargh.  I hate compilers.  I added a comment.  Should I think about
> changing mq_detached to pg_atomic_uint32 instead?

I think a pg_compiler_barrier() would suffice to alleviate my concern,
right? If you wanted to go for an atomic, using pg_atomic_flag would
probably be more appropriate than pg_atomic_uint32.
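
I.e., just to illustrate the cheap option, something along these lines
at the top of each loop iteration:

    /* force the compiler to re-read mq_detached on every iteration */
    pg_compiler_barrier();
    if (mq->mq_detached)
        return SHM_MQ_DETACHED;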


> >> -                     /* Write as much data as we can via a single memcpy(). */
> >> +                     /*
> >> +                      * Write as much data as we can via a single memcpy(). Make sure
> >> +                      * these writes happen after the read of mq_bytes_read, above.
> >> +                      * This barrier pairs with the one in shm_mq_inc_bytes_read.
> >> +                      */
> >
> > s/above/above. Otherwise a newer mq_bytes_read could become visible
> > before the corresponding reads have fully finished./?
>
> I don't find that very clear.  A newer mq_bytes_read could become
> visible whenever, and a barrier doesn't prevent that from happening.

Well, my point was that the barrier prevents the write to
mq_bytes_read becoming visible before the corresponding reads have
finished. Which then would mean the memcpy() could overwrite them. And a
barrier *does* prevent that from happening.

I don't think this is the same as:

> What it does is ensure (together with the one in
> shm_mq_inc_bytes_read) that we don't try to read bytes that aren't
> fully *written* yet.

which seems much more about the barrier in shm_mq_inc_bytes_written()?


> Generally, my mental model is that barriers make things happen in
> program order rather than some other order that the CPU thinks would
> be fun.  Imagine a single-core server doing all of this stuff the "old
> school" way.  If the reader puts data into the queue before
> advertising its presence and the writer finishes using the data from
> the queue before advertising its consumption, then everything works.
> If you do anything else, it's flat busted, even on that single-core
> system, because a context switch could happen at any time, and then
> you might read data that isn't written yet.  The barrier just ensures
> that we get that order of execution even on fancy modern hardware, but
> I'm not sure how much of that we really need to explain here.

IDK, I find it nontrivial to understand individual uses of
barriers. There are often multiple non-equivalent ways to use barriers,
and the logic for why a specific one is correct isn't always obvious.


> >> +                     pg_memory_barrier();
> >>                       memcpy(&mq->mq_ring[mq->mq_ring_offset + offset],
> >>                                  (char *) data + sent, sendnow);
> >>                       sent += sendnow;
> >
> > Btw, this mq_ring_offset stuff seems a bit silly, why don't we use
> > proper padding/union in the struct to make it unnecessary to do that bit
> > of offset calculation every time? I think it currently prevents
> > efficient address calculation instructions from being used.
>
> Well, the root cause -- aside from me being a fallible human being
> with only limited programing skills -- is that I wanted the parallel
> query code to be able to request whatever queue size it preferred
> without having to worry about how many bytes of that space was going
> to get consumed by overhead.

What I meant is that instead of doing
struct shm_mq
{
    ...
    bool        mq_detached;
    uint8        mq_ring_offset;
    char        mq_ring[FLEXIBLE_ARRAY_MEMBER];
};

it'd be possible to do something like

struct shm_mq
{
    ...
    bool        mq_detached;
    union
    {
        char        mq_ring[FLEXIBLE_ARRAY_MEMBER];
        double      forcealign;
    }           d;
};

which'd force the struct to be laid out so mq_ring is at a suitable
offset. We use that in a bunch of places.
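
With that, the access in the hunk above could hypothetically lose the
runtime addition, i.e.

    memcpy(&mq->mq_ring[mq->mq_ring_offset + offset], ..., sendnow);

would become something like

    memcpy(&mq->d.mq_ring[offset], ..., sendnow);

with the start of the ring known at compile time.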

As far as I understand that'd not run counter to your goals of:
> without having to worry about how many bytes of that space was going
> to get consumed by overhead.

right?


> change it up, if somebody felt like working out how the API should be
> set up.  I don't really want to do that right now, though.

Right.


> >> From 666d33a363036a647dde83cb28b9d7ad0b31f76c Mon Sep 17 00:00:00 2001
> >> From: Robert Haas <rhaas@postgresql.org>
> >> Date: Sat, 4 Nov 2017 19:03:03 +0100
> >> Subject: [PATCH 2/2] shm-mq-reduce-receiver-latch-set-v1
> >
> >> -     /* Consume any zero-copy data from previous receive operation. */
> >> -     if (mqh->mqh_consume_pending > 0)
> >> +     /*
> >> +      * If we've consumed an amount of data greater than 1/4th of the ring
> >> +      * size, mark it consumed in shared memory.  We try to avoid doing this
> >> +      * unnecessarily when only a small amount of data has been consumed,
> >> +      * because SetLatch() is fairly expensive and we don't want to do it
> >> +      * too often.
> >> +      */
> >> +     if (mqh->mqh_consume_pending > mq->mq_ring_size / 4)
> >>       {
> >
> > Hm. Why are we doing this at the level of updating the variables, rather
> > than SetLatch calls?
>
> Hmm, I'm not sure I understand what you're suggesting, here.  In
> general, we return with the data for the current message unconsumed,
> and then consume it the next time the function is called, so that
> (except when the message wraps the end of the buffer) we can return a
> pointer directly into the buffer rather than having to memcpy().  What
> this patch does is postpone consuming the data further, either until
> we can free up at least a quarter of the ring buffer or until we need
> to wait for more data. It seemed worthwhile to free up space in the
> ring buffer occasionally even if we weren't to the point of waiting
> yet, so that the sender has an opportunity to write new data into that
> space if it happens to still be running.

What I'm trying to suggest is that instead of postponing an update of
mq_bytes_read (by storing the amount of already-performed reads in
mqh_consume_pending), we continue to update mq_bytes_read but only set
the latch when the thresholds you describe above are crossed. That way a
burst of writes can fully fill the ring buffer, but the cost of doing a
SetLatch() is amortized. In my testing SetLatch() was the expensive
part, not the necessary barriers in shm_mq_inc_bytes_read().
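
In pseudo-code -- hand-wavy, ignoring wraparound, assuming
shm_mq_inc_bytes_read() no longer does its own SetLatch(), and with
mqh_unsignaled_bytes being an invented field -- something like:

/* publish the consumed bytes immediately */
pg_read_barrier();
pg_atomic_write_u64(&mq->mq_bytes_read,
                    pg_atomic_read_u64(&mq->mq_bytes_read) + rb);

/* but only wake the sender once enough space has accumulated */
mqh->mqh_unsignaled_bytes += rb;
if (mqh->mqh_unsignaled_bytes > mq->mq_ring_size / 4)
{
    SetLatch(&mq->mq_sender->procLatch);
    mqh->mqh_unsignaled_bytes = 0;
}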

- Andres


Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Wed, Feb 7, 2018 at 1:41 PM, Andres Freund <andres@anarazel.de> wrote:
> Well, it's more than just systems like that - for 64bit atomics we
> sometimes do fall back to spinlock based atomics on 32bit systems, even
> if they support 32 bit atomics.

I built with -m32 on my laptop and tried "select aid, count(*) from
pgbench_accounts group by 1 having count(*) > 1" on pgbench at scale
factor 100 with pgbench_accounts_pkey dropped and
max_parallel_workers_per_gather set to 10 on (a) commit
0b5e33f667a2042d7022da8bef31a8be5937aad1 (I know this is a little old,
but I think it doesn't matter), (b) same plus
shm-mq-less-spinlocks-v3, and (c) same plus shm-mq-less-spinlocks-v3
and shm-mq-reduce-receiver-latch-set-v2.

(a) 16563.790 ms, 16625.257 ms, 16496.062 ms
(b) 17217.051 ms, 17157.745 ms, 17225.755 ms [median to median +3.9% vs. (a)]
(c) 15491.947 ms, 15455.840 ms, 15452.649 ms [median to median -7.0%
vs. (a), -10.2% vs (b)]

Do you think that's a problem?  If it is, what do you think we should
do about it?  It seems to me that it's probably OK because (1) with
both patches we still come out ahead and (2) 32-bit systems will
presumably continue to become rarer as time goes on, but you might
disagree.

>> > Hm, do all paths here guarantee that mq->mq_detached won't be stored on
>> > the stack / register in the first iteration, and then not reread any
>> > further? I think it's fine because every branch of the if below ends up
>> > in a syscall / barrier, but it might be worth noting on that here.
>>
>> Aargh.  I hate compilers.  I added a comment.  Should I think about
>> changing mq_detached to pg_atomic_uint32 instead?
>
> I think a pg_compiler_barrier() would suffice to alleviate my concern,
> right? If you wanted to go for an atomic, using pg_atomic_flag would
> probably be more appropriate than pg_atomic_uint32.

Hmm, all right, I'll add pg_compiler_barrier().

>> >> -                     /* Write as much data as we can via a single memcpy(). */
>> >> +                     /*
>> >> +                      * Write as much data as we can via a single memcpy(). Make sure
>> >> +                      * these writes happen after the read of mq_bytes_read, above.
>> >> +                      * This barrier pairs with the one in shm_mq_inc_bytes_read.
>> >> +                      */
>> >
>> > s/above/above. Otherwise a newer mq_bytes_read could become visible
>> > before the corresponding reads have fully finished./?
>>
>> I don't find that very clear.  A newer mq_bytes_read could become
>> visible whenever, and a barrier doesn't prevent that from happening.
>
> Well, my point was that the barrier prevents the write to
> mq_bytes_read from becoming visible before the corresponding reads have
> finished; if it did, the memcpy() could overwrite data that hasn't been
> read yet. And a barrier *does* prevent that from happening.

I think we're talking about the same thing, but not finding each
other's explanations very clear.

>> Hmm, I'm not sure I understand what you're suggesting, here.  In
>> general, we return with the data for the current message unconsumed,
>> and then consume it the next time the function is called, so that
>> (except when the message wraps the end of the buffer) we can return a
>> pointer directly into the buffer rather than having to memcpy().  What
>> this patch does is postpone consuming the data further, either until
>> we can free up at least a quarter of the ring buffer or until we need
>> to wait for more data. It seemed worthwhile to free up space in the
>> ring buffer occasionally even if we weren't to the point of waiting
>> yet, so that the sender has an opportunity to write new data into that
>> space if it happens to still be running.
>
> What I'm trying to suggest is that instead of postponing an update of
> mq_bytes_read (by storing the amount of already-performed reads in
> mqh_consume_pending), we continue to update mq_bytes_read but only set
> the latch when the thresholds you describe above are crossed. That way a
> burst of writes can fully fill the ring buffer, but the cost of doing a
> SetLatch() is amortized. In my testing SetLatch() was the expensive
> part, not the necessary barriers in shm_mq_inc_bytes_read().

OK, I'll try to check how feasible that would be.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] [POC] Faster processing at Gather node

From
Andres Freund
Date:
Hi,

On 2018-02-27 16:03:17 -0500, Robert Haas wrote:
> On Wed, Feb 7, 2018 at 1:41 PM, Andres Freund <andres@anarazel.de> wrote:
> > Well, it's more than just systems like that - for 64bit atomics we
> > sometimes do fall back to spinlock based atomics on 32bit systems, even
> > if they support 32 bit atomics.
> 
> I built with -m32 on my laptop and tried "select aid, count(*) from
> pgbench_accounts group by 1 having count(*) > 1" on pgbench at scale
> factor 100 with pgbench_accounts_pkey dropped and
> max_parallel_workers_per_gather set to 10 on (a) commit
> 0b5e33f667a2042d7022da8bef31a8be5937aad1 (I know this is a little old,
> but I think it doesn't matter), (b) same plus
> shm-mq-less-spinlocks-v3, and (c) same plus shm-mq-less-spinlocks-v3
> and shm-mq-reduce-receiver-latch-set-v2.
> 
> (a) 16563.790 ms, 16625.257 ms, 16496.062 ms
> (b) 17217.051 ms, 17157.745 ms, 17225.755 ms [median to median +3.9% vs. (a)]
> (c) 15491.947 ms, 15455.840 ms, 15452.649 ms [median to median -7.0%
> vs. (a), -10.2% vs (b)]
> 
> Do you think that's a problem?  If it is, what do you think we should
> do about it?  It seems to me that it's probably OK because (1) with
> both patches we still come out ahead and (2) 32-bit systems will
> presumably continue to become rarer as time goes on, but you might
> disagree.

No, I think this is fairly reasonable. A fairly extreme use case on a
32-bit machine regressing a bit, while gaining performance in the other
cases? That works for me.


> OK, I'll try to check how feasible that would be.

cool.

Greetings,

Andres Freund


Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Tue, Feb 27, 2018 at 4:06 PM, Andres Freund <andres@anarazel.de> wrote:
>> OK, I'll try to check how feasible that would be.
>
> cool.

It's not too hard, but it doesn't really seem to help, so I'm inclined
to leave it alone.  To make it work, you need to keep two separate
counters in the shm_mq_handle, one for the number of bytes since we
did an increment and the other for the number of bytes since we sent a
signal.  I don't really want to introduce that complexity unless there
is a benefit.
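
For reference, the shape it takes is roughly this -- sketch only, the
field names and thresholds are invented, and the non-signalling
increment helper is hypothetical:

mqh->mqh_bytes_since_inc += rb;
mqh->mqh_bytes_since_signal += rb;

if (mqh->mqh_bytes_since_inc > mq->mq_ring_size / 4)
{
    /* let the sender reuse the space, but don't wake it yet */
    shm_mq_inc_bytes_read_nosignal(mq, mqh->mqh_bytes_since_inc);
    mqh->mqh_bytes_since_inc = 0;
}

if (mqh->mqh_bytes_since_signal > mq->mq_ring_size / 2)
{
    SetLatch(&mq->mq_sender->procLatch);
    mqh->mqh_bytes_since_signal = 0;
}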

With just 0001 and 0002: 3968.899 ms, 4043.428 ms, 4042.472 ms, 4142.226 ms
With two-separate-counters.patch added: 4123.841 ms, 4101.917 ms,
4063.368 ms, 3985.148 ms

If you take the total of the 4 times, that's a 0.4% slowdown with the
patch applied, but I think that's just noise.  It seems possible that
with a larger queue -- and maybe a different query shape -- it would
help, but I really just want to get the optimizations that I've got
committed, provided that you find them acceptable, rather than spend a
lot of time looking for new optimizations, because:

1. I've got other things to get done.

2. I think that the patches I've got here capture most of the available benefit.

3. This case isn't super-common in the first place -- we generally
want to avoid feeding tons of tuples through the Gather.

4. We might abandon the shm_mq approach entirely and switch to
something like sticking tuples in DSA using the flexible tuple slot
stuff you've proposed elsewhere.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

Attachment

Re: [HACKERS] [POC] Faster processing at Gather node

From
Robert Haas
Date:
On Wed, Feb 28, 2018 at 10:06 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> [ latest patches ]

Committed.  Thanks for the review.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: [HACKERS] [POC] Faster processing at Gather node

From
"Tels"
Date:
Hello Robert,

On Fri, March 2, 2018 12:22 pm, Robert Haas wrote:
> On Wed, Feb 28, 2018 at 10:06 AM, Robert Haas <robertmhaas@gmail.com>
> wrote:
>> [ latest patches ]
>
> Committed.  Thanks for the review.

Cool :)

There is a typo, tho:

+    /*
+     * If the counterpary is known to have attached, we can read mq_receiver
+     * without acquiring the spinlock and assume it isn't NULL.  Otherwise,
+     * more caution is needed.
+     */

s/counterpary/counterparty/;

Sorry, only noticed while re-reading the thread.

Also, either a double space is missing, or one is too many:

+    /*
+     * Separate prior reads of mq_ring from the increment of mq_bytes_read
+     * which follows.  Pairs with the full barrier in shm_mq_send_bytes(). We
+     * only need a read barrier here because the increment of mq_bytes_read is
+     * actually a read followed by a dependent write.
+     */

("  Pairs ..." vs. ". We only ...")

Best regards,

Tels


Re: [HACKERS] [POC] Faster processing at Gather node

From
Bruce Momjian
Date:
On Fri, Mar  2, 2018 at 05:21:28PM -0500, Tels wrote:
> Hello Robert,
> 
> On Fri, March 2, 2018 12:22 pm, Robert Haas wrote:
> > On Wed, Feb 28, 2018 at 10:06 AM, Robert Haas <robertmhaas@gmail.com>
> > wrote:
> >> [ latest patches ]
> >
> > Committed.  Thanks for the review.
> 
> Cool :)
> 
> There is a typo, tho:
> 
> +    /*
> +     * If the counterpary is known to have attached, we can read mq_receiver
> +     * without acquiring the spinlock and assume it isn't NULL.  Otherwise,
> +     * more caution is needed.
> +     */
> 
> s/counterpary/counterparty/;
> 
> Sorry, only noticed while re-reading the thread.
> 
> Also, either a double space is missing, or one is too many:
> 
> +    /*
> +     * Separate prior reads of mq_ring from the increment of mq_bytes_read
> +     * which follows.  Pairs with the full barrier in shm_mq_send_bytes(). We
> +     * only need a read barrier here because the increment of mq_bytes_read is
> +     * actually a read followed by a dependent write.
> +     */
> 
> ("  Pairs ..." vs. ". We only ...")
> 
> Best regards,

Change applied with the attached patch.

-- 
  Bruce Momjian  <bruce@momjian.us>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +

Attachment