Re: Performance problem in PLPgSQL - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: Performance problem in PLPgSQL
Msg-id: 23431.1377294900@sss.pgh.pa.us
In response to: Re: Performance problem in PLPgSQL (Pavel Stehule <pavel.stehule@gmail.com>)
Responses: Re: Performance problem in PLPgSQL (Marc Cousin <cousinmarc@gmail.com>)
           Re: Performance problem in PLPgSQL (Pavel Stehule <pavel.stehule@gmail.com>)
List: pgsql-hackers
Pavel Stehule <pavel.stehule@gmail.com> writes:
> please, can you send a self-explained test?
> this issue should be fixed, and we need examples.

We already had a perfectly good example at the beginning of this thread.
What's missing is a decision on how we ought to approximate the cost of
planning (relative to execution costs).

As I mentioned upthread, it doesn't seem unreasonable to me to do
something quick-and-dirty based on the length of the plan's rangetable.
Pretty nearly anything would fix these specific situations where the
estimated execution cost is negligible.  It's possible that there are
more complicated cases where we'll need a more accurate estimate, but
we've not seen an example of that yet.

My previous suggestion was to estimate planning cost as

    10 * (length(plan->rangetable) + 1)

but on reflection it ought to be scaled by one of the cpu cost constants,
so perhaps

    1000 * cpu_operator_cost * (length(plan->rangetable) + 1)
which'd mean a custom plan has to be estimated to save a minimum of
about 5 cost units before it'll be chosen: with the default
cpu_operator_cost of 0.0025 and a single table, the planning charge
works out to 1000 * 0.0025 * (1 + 1) = 5 (more if more than 1 table
is used).  I'm tempted to make the multiplier be 10000 not 1000,
but it seems better to be conservative about changing the behavior
until we see how well this works in practice.
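
For concreteness, here is a minimal standalone sketch in C of the
heuristic as proposed above.  It is illustrative only: the helper names
(planning_cost, choose_custom_plan) and the decision rule are assumptions
for demonstration, not actual plancache code.

/*
 * Standalone sketch of the proposed planning-cost heuristic.
 * Illustrative only -- not the actual plancache implementation.
 */
#include <stdio.h>
#include <stdbool.h>

#define CPU_OPERATOR_COST 0.0025    /* PostgreSQL's default cpu_operator_cost */

/* Charge for replanning, proportional to rangetable length. */
static double
planning_cost(int rtable_length)
{
    return 1000.0 * CPU_OPERATOR_COST * (rtable_length + 1);
}

/*
 * Prefer a custom plan only when its estimated execution cost plus
 * the cost of planning it afresh beats the cached generic plan.
 */
static bool
choose_custom_plan(double generic_cost, double custom_cost, int rtable_length)
{
    return custom_cost + planning_cost(rtable_length) < generic_cost;
}

int
main(void)
{
    printf("planning charge, 1 table:  %.2f\n", planning_cost(1));  /* 5.00 */
    printf("planning charge, 4 tables: %.2f\n", planning_cost(4));  /* 12.50 */
    /* custom plan saves 6 units, more than the 5-unit charge -> chosen */
    printf("custom plan chosen? %s\n",
           choose_custom_plan(100.0, 94.0, 1) ? "yes" : "no");
    return 0;
}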

Objections, better ideas?
        regards, tom lane


