Re: [DESIGN] ParallelAppend - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: [DESIGN] ParallelAppend
Date:
Msg-id: CA+TgmoaATP3p8MP+CHJ2kzp6OTUnhX3hNCV1sEyiRF-B+R1vZg@mail.gmail.com
In response to: Re: [DESIGN] ParallelAppend (Thom Brown <thom@linux.com>)
Responses: Re: [DESIGN] ParallelAppend (Thom Brown <thom@linux.com>)
List: pgsql-hackers
On Tue, Nov 17, 2015 at 4:26 AM, Thom Brown <thom@linux.com> wrote:
> Okay, I've tried this patch.

Thanks!

> Yes, it's working!

Woohoo.

> However, the first parallel seq scan shows it getting 170314 rows.
> Another run shows it getting 194165 rows.  The final result is
> correct, but as you can see from the rows on the Append node (59094295
> rows), it doesn't match the number of rows on the Gather node
> (30000000).

Is this the same issue reported in
http://www.postgresql.org/message-id/CAFj8pRBF-i=qDg9b5nZrXYfChzBEZWmthxYPhidQvwoMOjHtzg@mail.gmail.com
and not yet fixed?  I am inclined to think it probably is.

> And also, for some reason, I can no longer get this using more than 2
> workers, even with max_worker_processes = 16 and max_parallel_degree =
> 12.  I don't know if that's anything to do with this patch though.

The number of workers is limited based on the size of the largest
table involved in the Append.  That probably needs considerable
improvement, of course, but this patch is still a step forward over
not-this-patch.
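
Just to illustrate the shape of that heuristic, here is a simplified
sketch, not the exact code in the patch: one worker is allowed once the
largest table reaches some threshold number of pages, and another worker
is added each time the table grows by a further constant factor, capped
by max_parallel_degree.  The 1000-page threshold and the tripling factor
below are assumptions for illustration only.

#include <stdio.h>

/* Sketch: derive a worker count from the largest table's size in pages. */
static int
workers_for_pages(long pages, int max_parallel_degree)
{
    long    threshold = 1000;   /* assumed: pages needed for one worker */
    int     workers = 0;

    while (pages >= threshold && workers < max_parallel_degree)
    {
        workers++;
        threshold *= 3;         /* assumed: each extra worker needs 3x more */
    }
    return workers;
}

int
main(void)
{
    long    sizes[] = {500, 1000, 3000, 9000, 27000, 81000};

    for (int i = 0; i < 6; i++)
        printf("%ld pages -> %d workers\n",
               sizes[i], workers_for_pages(sizes[i], 12));
    return 0;
}

With numbers like these, a modest-sized largest child table would cap the
plan at two workers even with max_worker_processes = 16 and
max_parallel_degree = 12, which is consistent with what you're seeing.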

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
