Re: [HACKERS] why not parallel seq scan for slow functions - Mailing list pgsql-hackers

From Jeff Janes
Subject Re: [HACKERS] why not parallel seq scan for slow functions
Date
Msg-id CAMkU=1xcn1W1MSEtDtb90JhC8phRnNA2Yc4hwQk-rEqQ8rkhbQ@mail.gmail.com
In response to Re: [HACKERS] why not parallel seq scan for slow functions  (Thomas Munro <thomas.munro@enterprisedb.com>)
Responses Re: [HACKERS] why not parallel seq scan for slow functions  (Amit Kapila <amit.kapila16@gmail.com>)
List pgsql-hackers
On Tue, Sep 19, 2017 at 1:17 PM, Thomas Munro <thomas.munro@enterprisedb.com> wrote:
> On Thu, Sep 14, 2017 at 3:19 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:
>> The attached patch fixes both the review comments as discussed above.
>
> This cost stuff looks unstable:
>
> test select_parallel          ... FAILED
>
> !  Gather  (cost=0.00..623882.94 rows=9976 width=8)
>      Workers Planned: 4
> !    ->  Parallel Seq Scan on tenk1  (cost=0.00..623882.94 rows=2494 width=8)
>   (3 rows)
>
>   drop function costly_func(var1 integer);
> --- 112,120 ----
>   explain select ten, costly_func(ten) from tenk1;
>                                    QUERY PLAN
>   ----------------------------------------------------------------------------
> !  Gather  (cost=0.00..625383.00 rows=10000 width=8)
>      Workers Planned: 4
> !    ->  Parallel Seq Scan on tenk1  (cost=0.00..625383.00 rows=2500 width=8)
>   (3 rows)

That should be fixed by turning costs off in the explain, as is the tradition.
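
For illustration, a minimal sketch of that idiom applied to the query from the failing test (not the attached patch itself; the plan shape is copied from the diff above, with the cost, row, and width figures suppressed by COSTS OFF):

explain (costs off) select ten, costly_func(ten) from tenk1;
            QUERY PLAN
----------------------------------
 Gather
   Workers Planned: 4
   ->  Parallel Seq Scan on tenk1
(3 rows)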


See attached.

Cheers,

Jeff
