Re: Parallel append plan instability/randomness - Mailing list pgsql-hackers

From: Amit Kapila
Subject: Re: Parallel append plan instability/randomness
Msg-id: CAA4eK1JHbp2uEvmBVv7JRdCFznG9zWVy6aBB0-8arxZm4P-4hg@mail.gmail.com
In response to: Re: Parallel append plan instability/randomness (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: Parallel append plan instability/randomness
List: pgsql-hackers
On Mon, Jan 8, 2018 at 11:26 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Amit Khandekar <amitdkhan.pg@gmail.com> writes:
>> The fact that b_star gets moved from 5th position to the first
>> position in the scans indicates that its cost shoots up from 1.04 to
>> a value greater than 1.16. It does not look like a case where two
>> costs are almost the same, causing their positions to swap sometimes.
>> I am trying to figure out what else it can be ...
>

That occurred to me as well, but still, the change in plan can happen
due to similar costs.  Another possibility, as indicated in the
previous email, is that the table's stats (reltuples, relpages) are
somehow not right, say because ANALYZE did not run on the table for
some reason.  For example, if you manually update the value of
reltuples for b_star in pg_class to 20 or so, you will see the plan as
indicated in the failure.  If that is the cause, then running ANALYZE
before the Parallel Append test should do the trick.
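If it helps, that experiment can be scripted roughly as follows (a
hedged sketch against the regression database's a_star/b_star
hierarchy; the GUC settings are only there to force a parallel plan on
such small tables, and the query is illustrative rather than the exact
regression test):

    -- Force parallel plans despite the tiny table sizes.
    SET parallel_setup_cost = 0;
    SET parallel_tuple_cost = 0;
    SET min_parallel_table_scan_size = 0;
    SET max_parallel_workers_per_gather = 4;

    -- Simulate stale/bogus stats by hand (superuser only, purely as a
    -- diagnostic experiment).
    UPDATE pg_class SET reltuples = 20 WHERE relname = 'b_star';

    -- The child scans under the Parallel Append may now come out in a
    -- different order, because their estimated costs have changed.
    EXPLAIN (COSTS OFF) SELECT count(*) FROM a_star;

    -- Refreshing the stats should bring back the expected ordering.
    ANALYZE b_star;
    EXPLAIN (COSTS OFF) SELECT count(*) FROM a_star;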

> The gut feeling I had upon seeing the failure was that the plan shape
> depends on the order in which rows happen to be read from the system
> catalogs by a heapscan.  I've not tried to run that idea to ground yet.
>

I don't see how something like that can happen because we internally
sort the subpaths for parallel append.


-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

