Thread: Revisiting default_statistics_target

Revisiting default_statistics_target

From
Greg Smith
Date:
Yesterday Jignesh Shah presented his extensive benchmark results comparing 
8.4-beta1 with 8.3.7 at PGCon: 
http://blogs.sun.com/jkshah/entry/pgcon_2009_performance_comparison_of

While most cases were dead even or a modest improvement, his dbt-2 results 
suggest a 15-20% regression in 8.4.  Changing the default_statistics_target 
to 100 was responsible for about 80% of that regression.  The remainder 
was from the constraint_exclusion change.  That 80/20 proportion was 
mentioned in the talk but not in the slides.  Putting both of those back to 
the 8.3 defaults swapped things around, with 8.4b1 ahead by 5% instead. 
(Note that all of the later benchmarks in his slides continued to use the 
default parameters; that change was only tested with that specific 
workload.)

The situations where the stats target being so low hurts things the most 
are the data warehouse use cases.  Josh Berkus tells me that his latest DW 
testing suggests that the 10->100 increase turns out to be insufficient 
anyway; 400+ is the range you really need that to be in.  I did a quick 
survey of some other community members who work in this space and that 
experience is not unique.  Josh has some early tools that tackle this 
problem by adjusting the stats target only when it's critical--on indexed 
columns for example.  I'm going to work with him to help get those 
polished, and to see if we can replicate some of those cases via a public 
benchmark.
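
To make the mechanism concrete: the per-column approach doesn't require 
touching the global default at all.  A minimal sketch of the kind of 
statement such a tool would emit, with an invented table and column:

    -- Raise the sample size only where it matters, e.g. an indexed column
    -- used in selective WHERE clauses (names here are hypothetical).
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 400;
    ANALYZE orders;   -- re-gather stats so the larger sample takes effect

    -- Per-column overrides are visible in pg_attribute.attstattarget:
    SELECT attname, attstattarget
      FROM pg_attribute
     WHERE attrelid = 'orders'::regclass AND attnum > 0;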

The bump from 10 to 100 was supported by microbenchmarks that suggested it 
would be tolerable.  That doesn't seem to be reality here though, and it's 
questionable whether this change really helps the people who need to fool 
with the value the most.  This sort of feedback is exactly why it made 
sense to try this out during the beta cycle.  But unless someone has some 
compelling evidence to the contrary, it looks like the stats target needs 
to go back to a lower value.  I think the best we can do here is to 
improve the documentation about this parameter and continue to work on 
tuning guides and tools to help people set it correctly.

As for the change to constraint_exclusion, the regression impact there is 
much less severe and the downside of getting it wrong is pretty bad. 
Rather than reverting it, the ideal response to that might be to see if 
it's possible to improve the "partition" code path.  But as I'm not going 
to volunteer to actually do that, I really don't get a vote here anyway.
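
For anyone who hasn't looked at that setting lately: 8.3 defaulted to off, 
while the new 8.4 default only considers CHECK constraints on inheritance 
children and UNION ALL branches.  A rough sketch of the case it targets, 
with invented table names:

    -- constraint_exclusion = off        (8.3 default: never use constraints)
    -- constraint_exclusion = partition  (8.4 default: child tables / UNION ALL only)
    -- constraint_exclusion = on         (consider CHECK constraints for every table)

    CREATE TABLE measurements (logdate date, value int);
    CREATE TABLE measurements_2009_05 (
        CHECK (logdate >= DATE '2009-05-01' AND logdate < DATE '2009-06-01')
    ) INHERITS (measurements);

    -- With exclusion enabled for partitions, a query such as
    --   SELECT * FROM measurements WHERE logdate = DATE '2009-05-22';
    -- can skip child tables whose CHECK constraint rules them out.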

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: Revisiting default_statistics_target

From
"Greg Sabino Mullane"
Date:


> While most cases were dead even or a modest improvement, his dbt-2 results
> suggest a 15-20% regression in 8.4.  Changing the default_statistics_target
> to 100 was responsible for about 80% of that regression.
...
> The situation where the stats target being so low hurts things the most
> are the data warehouse use cases.

Er...so why should we change our defaults to support data warehousing
users? Certainly the rest of the postgresql.conf settings don't attempt
to do that.

> The bump from 10 to 100 was supported by microbenchmarks that suggested it
> would be tolerable.

No, the 10 to 100 was supported by years of people working in the field who
routinely did that adjustment (and >100) and saw great gains. Also, as the one
who originally started the push to 100, my original goal was to get it over the
"magic 99" bump, at which the planner started acting very differently. This
caused a huge performance regression in one of the Postgres releases (don't
remember which one exactly), which severely impacted one of our large clients.

> That doesn't seem to be reality here though, and it's questionable whether
> this change really helps the people who need to fool with the value the most.

The goal of defaults is not to help people who fool with the value - it's to
get a good default out of the box for people who *don't* fool with all the
values. :)

> But unless someone has some compelling evidence to the contrary, it looks like
> the stats target needs to go back to a lower value.

Please don't. This is a very good change, and I don't see why changing it back
because it might hurt people doing DW is a good thing, when most users are
not doing DW.

> As for the change to constraint_exclusion, the regression impact there is
> much less severe and the downside of getting it wrong is pretty bad.

Similarly, the people who are affected by something like this presumably are not
running a default postgresql.conf anyway, so they can toggle it back to squeeze
a little more performance out of their system.

--
Greg Sabino Mullane greg@turnstep.com
End Point Corporation
PGP Key: 0x14964AC8 200905221239
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8



Re: Revisiting default_statistics_target

From
"Joshua D. Drake"
Date:
On Fri, 2009-05-22 at 16:43 +0000, Greg Sabino Mullane wrote:
> > While most cases were dead even or a modest improvement, his dbt-2 results
> > suggest a 15-20% regression in 8.4.  Changing the default_statistics_target
> > to 100 was responsible for about 80% of that regression.
> ...
> > The situation where the stats target being so low hurts things the most
> > are the data warehouse use cases.

Nor is it our primary user base. If we want to do this we need to have
more than one postgresql.conf as a template.


> > That doesn't seem to be reality here though, and it's questionable whether
> > this change really helps the people who need to fool with the value the most.
> 
> The goal of defaults is not to help people who fool with the value - it's to
> get a good default out of the box for people who *don't* fool with all the
> values. :)

Right. If someone is really doing a DW they are already spending time
with the postgresql.conf.


> > But unless someone has some compelling evidence to the contrary, it looks like
> > the stats target needs to go back to a lower value.
> 
> Please don't. This is a very good change, and I don't see why changing it back
> because it might hurt people doing DW is a good thing, when most users are
> not doing DW.

I haven't seen any evidence to suggest that Jignesh's findings provide
anything more than a single data point from our smallest user base.
Reverting a value based on that seems like a mistake.

> 
> > As for the change to constraint_exclusion, the regression impact there is
> > much less severe and the downside of getting it wrong is pretty bad.
> 
> Similarly, the people who are affected by something like this presumably are not
> running a default postgresql.conf anyway, so they can toggle it back to squeeze
> a little more performance out of their system.
> 

Right.

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdrake@jabber.postgresql.org
Consulting, Development, Support, Training
503-667-4564 - http://www.commandprompt.com/
The PostgreSQL Company, serving since 1997
 



Re: Revisiting default_statistics_target

From
Stephen Frost
Date:
Greg,

* Greg Sabino Mullane (greg@turnstep.com) wrote:
> > While most cases were dead even or a modest improvement, his dbt-2 results
> > suggest a 15-20% regression in 8.4.  Changing the default_statistics_target
> > to 100 was responsible for about 80% of that regression.
> ...
> > The situation where the stats target being so low hurts things the most
> > are the data warehouse use cases.
>
> Er...so why should we change our defaults to support data warehousing
> users? Certainly the rest of the postgresql.conf settings don't attempt
> to do that.

dbt-2 is for OLTP, not for DW.  Greg Smith's comment was actually that
we shouldn't penalize the OLTP crowd (by raising the value) for the
benefit of the DW crowd (who need it higher than 100 anyway).

http://osdldbt.sourceforge.net/

	Stephen

Re: Revisiting default_statistics_target

From
Tom Lane
Date:
Greg Smith <gsmith@gregsmith.com> writes:
> Yesterday Jignesh Shah presented his extensive benchmark results comparing 
> 8.4-beta1 with 8.3.7 at PGCon: 
> http://blogs.sun.com/jkshah/entry/pgcon_2009_performance_comparison_of

> While most cases were dead even or a modest improvement, his dbt-2 results 
> suggest a 15-20% regression in 8.4.  Changing the default_statistics_target 
> to 100 was responsible for about 80% of that regression.  The remainder 
> was from the constraint_exclusion change.  That 80/20 proportion was 
> mentioned in the talk but not in the slides.  Putting both those back to 
> the 8.3 defaults swapped things where 8.4b1 was ahead by 5% instead. 

Yeah, I saw that talk and I'm concerned too, but I think it's premature
to conclude that the problem is precisely that stats_target is now too
high.  I'd like to see Jignesh check through the individual queries in
the test and make sure that none of them had plans that changed for the
worse.  The stats change might have just coincidentally tickled some
other planning issue.

Assuming that we do confirm that the hit comes from extra stats
processing time during planning, I'd still not want to go all the way
back to 10 as default.  Perhaps we'd end up compromising at something
like 50.
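
A by-hand version of that check, sketched here with a placeholder query in
place of the kit's actual statements:

    -- Session-level overrides, no restart needed.
    SET default_statistics_target = 10;
    ANALYZE;          -- re-sample with the old target
    EXPLAIN SELECT * FROM stock WHERE s_w_id = 1 AND s_quantity < 20;

    SET default_statistics_target = 100;
    ANALYZE;          -- re-sample with the new target
    EXPLAIN SELECT * FROM stock WHERE s_w_id = 1 AND s_quantity < 20;

    -- Identical plans would point at planning/ANALYZE overhead rather than
    -- a bad plan choice; differing plans deserve a closer look.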
        regards, tom lane


Re: Revisiting default_statistics_target

From
"Joshua D. Drake"
Date:
On Fri, 2009-05-22 at 13:35 -0400, Stephen Frost wrote:
> Greg,

> dbt-2 is for OLTP, not for DW.  Greg Smith's comment was actually that
> we shouldn't penalize the OLTP crowd (by raising the value) for the
> benefit of the DW crowd (who need it higher than 100 anyway).
> 

I appear to have completely missed the top part of Greg Smith's original
email. Sorry about that Greg.

We probably need to test this to get some more data points. Of course
that is why we have the performance lab :)

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdrake@jabber.postgresql.org
Consulting, Development, Support, Training
503-667-4564 - http://www.commandprompt.com/
The PostgreSQL Company, serving since 1997
 



Re: Revisiting default_statistics_target

From
"Jignesh K. Shah"
Date:

Greg Sabino Mullane wrote:
>> While most cases were dead even or a modest improvement, his dbt-2 results
>> suggest a 15-20% regression in 8.4.  Changing the default_statistics_target
>> to 100 was responsible for about 80% of that regression.
> ...
>> The situation where the stats target being so low hurts things the most
>> are the data warehouse use cases.
>
> Er...so why should we change our defaults to support data warehousing
> users? Certainly the rest of the postgresql.conf settings don't attempt
> to do that.

The test that was greatly impacted is DBT-2 (an OLTP benchmark) with the 
8.4 defaults, but it seems to work fine/better when reverted to the 8.3 
defaults.  The 8.4 defaults seemed to improve DBT-3 (data warehousing), 
though I haven't retested that with the 8.3 defaults in 8.4.   Of course I 
am not a big fan of DBT-2 myself and I am just providing data points from 
what I observed during my testing with various workloads.  I certainly 
don't claim to understand what is happening (yet).

-Jignesh


-- 
Jignesh Shah           http://blogs.sun.com/jkshah              
The New Sun Microsystems,Inc   http://sun.com/postgresql



Re: Revisiting default_statistics_target

From
andrew@dunslane.net
Date:
> Greg Smith <gsmith@gregsmith.com> writes:
>> Yesterday Jignesh Shah presented his extensive benchmark results
>> comparing
>> 8.4-beta1 with 8.3.7 at PGCon:
>> http://blogs.sun.com/jkshah/entry/pgcon_2009_performance_comparison_of
>
>> While most cases were dead even or a modest improvement, his dbt-2
>> results
>> suggest a 15-20% regression in 8.4.  Changing the
>> default_statistics_target
>> to 100 was responsible for about 80% of that regression.  The remainder
>> was from the constraint_exclusion change.  That 80/20 proportion was
>> mentioned in the talk but not in the slides.  Putting both those back to
>> the 8.3 defaults swapped things where 8.4b1 was ahead by 5% instead.
>
> Yeah, I saw that talk and I'm concerned too, but I think it's premature
> to conclude that the problem is precisely that stats_target is now too
> high.  I'd like to see Jignesh check through the individual queries in
> the test and make sure that none of them had plans that changed for the
> worse.  The stats change might have just coincidentally tickled some
> other planning issue.


Wouldn't he just need to rerun the tests with default_stats_target set to
the old value? I presume he has actually done this already in order to
come to the conclusion he did about the cause of the regression.

cheers

andrew




Re: Revisiting default_statistics_target

From
Tom Lane
Date:
andrew@dunslane.net writes:
> Wouldn't he just need to rerun the tests with default_stats_target set to
> the old value? I presume he has actually done this already in order to
> come to the conclusion he did about the cause of the regression.

Yeah, he did, so we know it's slower that way.  But exactly *why* it's
slower is not proven.  It could be an artifact rather than something
we really ought to react to.
        regards, tom lane


Re: Revisiting default_statistics_target

From
Tom Lane
Date:
"Joshua D. Drake" <jd@commandprompt.com> writes:
> We probably need to test this to get some more data points.

Agreed --- DBT2 is just one data point.  We shouldn't assume that it's
definitive.
        regards, tom lane


Re: Revisiting default_statistics_target

From
Tom Lane
Date:
"Greg Sabino Mullane" <greg@turnstep.com> writes:
> No, the 10 to 100 was supported by years of people working in the
> field who routinely did that adjustment (and >100) and saw great
> gains. Also, as the one who originally started the push to 100, my
> original goal was to get it over the "magic 99" bump, at which the
> planner started acting very differently.

That particular issue is gone anyway.

I'm not in a big hurry to revert this change either, but I think
Jignesh's results are sufficient reason to take a closer look at
the decision.
        regards, tom lane


Re: Revisiting default_statistics_target

From
"Andrew Dunstan"
Date:
On Fri, May 22, 2009 2:41 pm, Tom Lane wrote:
> "Greg Sabino Mullane" <greg@turnstep.com> writes:
>> No, the 10 to 100 was supported by years of people working in the
>> field who routinely did that adjustment (and >100) and saw great
>> gains. Also, as the one who originally started the push to 100, my
>> original goal was to get it over the "magic 99" bump, at which the
>> planner started acting very differently.
>
> That particular issue is gone anyway.
>
> I'm not in a big hurry to revert this change either, but I think
> Jignesh's results are sufficient reason to take a closer look at
> the decision.
>


We also need more data points just about this test. Does the behaviour
hold for other platforms, and what is the relationship between stats
target and timings (is it linear or is there a sudden jump at some level)?

cheers

andrew



Re: Revisiting default_statistics_target

From
Josh Berkus
Date:
On 5/22/09 2:36 PM, Tom Lane wrote:
> andrew@dunslane.net writes:
>> Wouldn't he just need to rerun the tests with default_stats_target set to
>> the old value? I presume he has actually done this already in order to
>> come to the conclusion he did about the cause of the regression.
>
> Yeah, he did, so we know it's slower that way.  But exactly *why* it's
> slower is not proven.  It could be an artifact rather than something
> we really ought to react to.

It appears (right now) to be an artifact.

The drop in performance happens with queries which are called using C 
stored procedures exclusively.  It doesn't show up on other benchmarks 
which call similar queries directly.

Jignesh and I will be testing some stuff next week to get a better idea 
of what exactly makes the drop happen, but for now this appears to be a 
corner case.

-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com


Re: Revisiting default_statistics_target

From
Greg Smith
Date:
On Fri, 22 May 2009, Greg Sabino Mullane wrote:

>> The bump from 10 to 100 was supported by microbenchmarks that suggested it
>> would be tolerable.
>
> No, the 10 to 100 was supported by years of people working in the field who
> routinely did that adjustment (and >100) and saw great gains.

No one is suggesting the increase isn't important to people running many 
common workloads.  The question the smaller benchmarks tried to answer is 
whether it was likely to detune anything else as a penalty for improving 
that situation.  The comments you made here can get turned right around at 
you:  if increasing the value in the field is sufficient to help out those 
that need it, why should the project at large accept any significant 
penalty that could apply to everyone just to help that subset?

Would you be happy with 8.4 going out the door if there really turns out 
to be a 15% penalty for other use cases by this change?  That's a PR 
nightmare waiting to happen, and the main reason I wanted to bring this up 
here with some additional details as soon as Jignesh's slides went 
public--so everyone here is aware of what's going on before this bit of 
news gets picked up anywhere else.

Hopefully whatever is happening to dbt2 will turn out to be a quirk not 
worth worrying about.  What if it turns out to be repeatable and expected 
to impact people in the field though?  I hope you'd recognize that your 
use case is no more entitled to trump other people's than the DW-only 
changes you were just making critical comments about.

Anyway, thanks to Stephen for concisely clarifying the position I was 
trying to present here, which is quite different from the one you were 
arguing against.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD


Re: Revisiting default_statistics_target

From
Greg Stark
Date:

On 22 May 2009, at 16:17, Greg Smith <gsmith@gregsmith.com> wrote:

> On Fri, 22 May 2009, Greg Sabino Mullane wrote:
>
>>> The bump from 10 to 100 was supported by microbenchmarks that
>>> suggested it would be tolerable.
>>
>> No, the 10 to 100 was supported by years of people working in the
>> field who routinely did that adjustment (and >100) and saw great gains.
>
> No one is suggesting the increase isn't important to people running  
> many common workloads.  The question the smaller benchmarks tried to  
> answer is whether it was likely to detune anything else as a penalty  
> for improving that situation.

As Tom implied, there are two possible problems and the response would
be different depending on what's going on.

If the plan changed due to the added stats and the new plan is worse,
then it's a problem, but lowering the stats target would just be
papering over it.  Ideally having better stats should never cause a
worse plan.

If on the other hand it turns out that planning the queries is taking
15% longer, well, then rerunning the microbenchmark using these queries
might give us a better default.

Or maybe we would just find the bottlenecks and fix them ...




Re: Revisiting default_statistics_target

From
Mark Wong
Date:
On Fri, May 22, 2009 at 10:38 AM, Tom Lane<tgl@sss.pgh.pa.us> wrote:
> Greg Smith <gsmith@gregsmith.com> writes:
>> Yesterday Jignesh Shah presented his extensive benchmark results comparing
>> 8.4-beta1 with 8.3.7 at PGCon:
>> http://blogs.sun.com/jkshah/entry/pgcon_2009_performance_comparison_of
>
>> While most cases were dead even or a modest improvement, his dbt-2 results
>> suggest a 15-20% regression in 8.4.  Changing the default_statistics_target
>> to 100 was responsible for about 80% of that regression.  The remainder
>> was from the constraint_exclusion change.  That 80/20 proportion was
>> mentioned in the talk but not in the slides.  Putting both those back to
>> the 8.3 defaults swapped things where 8.4b1 was ahead by 5% instead.
>
> Yeah, I saw that talk and I'm concerned too, but I think it's premature
> to conclude that the problem is precisely that stats_target is now too
> high.  I'd like to see Jignesh check through the individual queries in
> the test and make sure that none of them had plans that changed for the
> worse.  The stats change might have just coincidentally tickled some
> other planning issue.

I did a couple runs to see the effects on our Portland perf lab DL380 G5 system:

The scripts in the kit capture EXPLAIN (no ANALYZE) before and after
the test, and running a diff doesn't show any changes.  Someone will
have to run the queries by hand to get the EXPLAIN ANALYZE data (a
rough by-hand sketch follows the links below):

New 8.4 Defaults (constraint_exclusion=partition,
default_statistics_target=100):

report:

http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.j.2/report/

Plans:
http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.j.2/db/plan0.out
http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.j.2/db/plan1.out


With constraint_exclusion=off, default_statistics_target=10:

http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.j.1/report/

Plans:
http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.j.1/db/plan0.out
http://207.173.203.223/~markwkm/community6/dbt2/m1500-8.4beta2/m1500.8.4beta2.j.1/db/plan1.out
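
Not the kit's actual scripts, but the same capture can be done by hand from
psql; the query below is only a placeholder for the kit's statements:

    \o plans_target100.out
    EXPLAIN ANALYZE SELECT * FROM stock WHERE s_w_id = 1 AND s_quantity < 20;
    \o
    -- repeat under the other settings, then diff the output files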


On the DL380 G5 system, where I'm using a lot more drives than Jignesh,
I see a performance change of under 5%.  15651.14 notpm vs 16333.32
notpm.  And this is after a bit of tuning, not sure how much the out
of the box experience changes on this system.

Now if only I could figure out why oprofile doesn't like this system...

Regards.
Mark Wong


Re: Revisiting default_statistics_target

From
Josh Berkus
Date:
Mark,

> On the DL380 G5 system, where I'm using a lot more drives than Jignesh,
> I see a performance change of under 5%.  15651.14 notpm vs 16333.32
> notpm.  And this is after a bit of tuning, not sure how much the out
> of the box experience changes on this system.

Well, Jignesh and I identified two things which we think are "special" 
about DBT2: (1) it uses C stored procedures, and (2) we don't think it 
uses prepared plans.

I've been unable to reproduce any performance drop using pgbench.

-- 
Josh Berkus
PostgreSQL Experts Inc.
www.pgexperts.com


Re: Revisiting default_statistics_target

From
Simon Riggs
Date:
On Sat, 2009-06-06 at 12:06 -0700, Josh Berkus wrote:

> > On the DL380 G5 system, where I'm using a lot more drives than Jignesh,
> > I see a performance change of under 5%.  15651.14 notpm vs 16333.32
> > notpm.  And this is after a bit of tuning, not sure how much the out
> > of the box experience changes on this system.
> 
> Well, Jignesh and I identified two things which we think are "special" 
> about DBT2: (1) it uses C stored procedures, and (2) we don't think it 
> uses prepared plans.

If there is a performance regression it is almost certain to affect
planning; obviously if there is no planning there is no effect.  But not
everybody can or wants to use prepared plans, for a variety of reasons.

> I've been unable to reproduce any performance drop using pgbench.

I think we aren't likely to measure the effects accurately, since we are
unable to measure planning times with any sensible level of accuracy.
Increased planning times may not directly translate into performance
drops on many tests, though they can still represent a problem for many
people.

If we can specify an accurate test mechanism, we may get some reasonable
information for decision making.
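
One crude mechanism, for what it's worth: EXPLAIN without ANALYZE does
little beyond parse and plan, so timing repeated EXPLAINs from psql at each
setting gives at least a ballpark figure.  The query is just a placeholder:

    \timing
    SET default_statistics_target = 100;
    ANALYZE;
    -- run this many times and average; the elapsed time is a rough proxy
    -- for parse + plan cost since nothing is executed
    EXPLAIN SELECT * FROM stock WHERE s_w_id = 1 AND s_quantity < 20;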

-- 
Simon Riggs           www.2ndQuadrant.com
PostgreSQL Training, Services and Support



Re: Revisiting default_statistics_target

From
Tom Lane
Date:
Simon Riggs <simon@2ndQuadrant.com> writes:
> On Sat, 2009-06-06 at 12:06 -0700, Josh Berkus wrote:
>> Well, Jignesh and I identified two things which we think are "special" 
>> about DBT2: (1) it uses C stored procedures, and (2) we don't think it 
>> uses prepared plans.

> If there is a performance regression it is almost certain to affect
> planning; obviously if there is no planning there is no effect.

Yeah; on a benchmark that relies mainly on prepared plans, it'd be
unlikely you'd notice any effect at all, even from a very significant
increase in planning time.

My guess about the "C stored procedure" bit, if it really has any
relevance, is that it reduces the other overhead of the test case enough
that planning time becomes more significant than it would be in other
benchmark scenarios.
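
The distinction, sketched with a made-up query: a prepared statement pays
any extra planning cost once, while an unprepared query pays it on every
call.

    -- Planned once, executed many times: extra stats processing is paid once.
    PREPARE pick_stock(int, int) AS
        SELECT * FROM stock WHERE s_w_id = $1 AND s_quantity < $2;
    EXECUTE pick_stock(1, 20);
    EXECUTE pick_stock(2, 20);

    -- Planned on every call: any increase in planning time shows up each time.
    SELECT * FROM stock WHERE s_w_id = 1 AND s_quantity < 20;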

In any case, what we seem to have here is evidence that there are some
cases where the new default value of default_statistics_target is too
high and you can get a benefit by lowering it.  I'm not sure we should
panic about that.  Default values ought to be compromises.  If people
only ever change the default in one direction then it's probably not a
very good compromise.  We know that there are applications for which 100
is still too low, so maybe now we have got the pain spread out roughly
evenly...
        regards, tom lane


Re: Revisiting default_statistics_target

From
Greg Stark
Date:
On Sun, Jun 7, 2009 at 5:13 PM, Tom Lane<tgl@sss.pgh.pa.us> wrote:
>
> In any case, what we seem to have here is evidence that there are some
> cases where the new default value of default_statistics_target is too
> high and you can get a benefit by lowering it.  I'm not sure we should
> panic about that.  Default values ought to be compromises.  If people
> only ever change the default in one direction then it's probably not a
> very good compromise.  We know that there are applications for which 100
> is still too low, so maybe now we have got the pain spread out roughly
> evenly...

It would be nice to get oprofile working and see results for various
target sizes.

I've had trouble with oprofile myself. I think some machines have
bogus rtc hardware or something related that prevents it from working
properly.

--
greg
http://mit.edu/~gsstark/resume.pdf


Re: Revisiting default_statistics_target

From
Simon Riggs
Date:
On Sun, 2009-06-07 at 12:13 -0400, Tom Lane wrote:

> In any case, what we seem to have here is evidence that there are some
> cases where the new default value of default_statistics_target is too
> high and you can get a benefit by lowering it.  I'm not sure we should
> panic about that.  Default values ought to be compromises.  If people
> only ever change the default in one direction then it's probably not a
> very good compromise.  We know that there are applications for which
> 100 is still too low, so maybe now we have got the pain spread out
> roughly evenly...

I'm certainly happy with 100 as the default.

-- 
Simon Riggs           www.2ndQuadrant.com
PostgreSQL Training, Services and Support