
From Tom Lane
Subject Re: Revisiting default_statistics_target
Date
Msg-id 14622.1244391202@sss.pgh.pa.us
In response to Re: Revisiting default_statistics_target  (Simon Riggs <simon@2ndQuadrant.com>)
Responses Re: Revisiting default_statistics_target  (Greg Stark <stark@enterprisedb.com>)
Re: Revisiting default_statistics_target  (Simon Riggs <simon@2ndQuadrant.com>)
List pgsql-hackers
Simon Riggs <simon@2ndQuadrant.com> writes:
> On Sat, 2009-06-06 at 12:06 -0700, Josh Berkus wrote:
>> Well, Jignesh and I identified two things which we think are "special" 
>> about DBT2: (1) it uses C stored procedures, and (2) we don't think it 
>> uses prepared plans.

> If there is a performance regression it is almost certain to affect
> planning; obviously if there is no planning there is no effect.

Yeah; on a benchmark that relies mainly on prepared plans, it'd be
unlikely you'd notice any effect at all, even from a very significant
increase in planning time.
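
(For illustration only, a rough sketch of what such a workload looks like;
the table and statement names below are invented.  With a prepared
statement the plan is built once and reused, so extra planning cost from a
larger statistics target is paid once per session rather than per query:)

    -- Hypothetical example: planning happens at PREPARE time (or on the
    -- first EXECUTE), and later executions reuse the stored plan.
    PREPARE get_order (integer) AS
        SELECT * FROM orders WHERE order_id = $1;

    EXECUTE get_order(42);   -- reuses the plan; no repeated planning cost
    EXECUTE get_order(43);

    DEALLOCATE get_order;

In a benchmark driven that way, per-transaction cost is almost entirely
execution, so changes in planning time largely vanish from the numbers.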

My guess about the "C stored procedure" bit, if it really has any
relevance, is that it reduces the other overhead of the test case enough
that planning time becomes more significant than it would be in other
benchmark scenarios.

In any case, what we seem to have here is evidence that there are some
cases where the new default value of default_statistics_target is too
high and you can get a benefit by lowering it.  I'm not sure we should
panic about that.  Default values ought to be compromises.  If people
only ever change the default in one direction then it's probably not a
very good compromise.  We know that there are applications for which 100
is still too low, so maybe now we have got the pain spread out roughly
evenly...
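
To make that concrete (a sketch only; "orders" and "customer_id" are
made-up names): anyone who finds the new default of 100 too high can lower
it, and anyone for whom it's still too low can raise it per column:

    -- Hypothetical example: lower the target globally (10 was the old
    -- pre-8.4 default), then rebuild statistics at the new setting.
    SET default_statistics_target = 10;
    ANALYZE orders;

    -- ...or raise it for just the columns where 100 is still too low.
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
    ANALYZE orders;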
        regards, tom lane

