Thread: Re: [PATCHES] Better default_statistics_target

Re: [PATCHES] Better default_statistics_target

From
"Greg Sabino Mullane"
Date:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160


Simon spoke:
> The choice of 100 is because of the way the LIKE estimator is
> configured. Greg is not suggesting he measured it and found 100 to be
> best, he is saying that the LIKE operator is hard-coded at 100 and so
> the stats_target should reflect that.

Exactly.

> Setting it to 100 for all columns because of LIKE doesn't make much
> sense. I think we should set stats target differently depending upon the
> data type, but thats probably an 8.4 thing. Long text fields that might
> use LIKE should be set to 100. CHAR(1) and general fields should be set
> to 10.

Agreed, this would be a nice 8.4 thing. But what about 8.3 and 8.2? Is
there a reason not to make this change? I know I've been lazy and not run
any absolute figures, but rough tests show that raising it (from 10 to
100) results in a very minor increase in analyze time, even for large
databases. I think the burden of a slightly slower analyze time, which
can be easily adjusted, both in postgresql.conf and right before running
an analyze, is very small compared to the pain of some queries - which worked
before - suddenly running much, much slower for no apparent reason at all.
Sure, 100 may have been chosen somewhat arbitrarily for the LIKE thing,
but this is a current real-world performance regression (aka a bug,
according to a nearby thread). Almost everyone agrees that 10 is too low,
so why not make it 100, throw a big warning in the release notes, and
then start some serious re-evaluation for 8.4?


- --
Greg Sabino Mullane greg@turnstep.com
End Point Corporation
PGP Key: 0x14964AC8 200712050920
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8


-----BEGIN PGP SIGNATURE-----

iD8DBQFHVrSivJuQZxSWSsgRAyDNAKCInH9SJRO8ly1L1MomJUPlBslBlgCeLQ1v
+w4ZumRcB5U5L3SGT0rk4AE=
=I8Ur
-----END PGP SIGNATURE-----



Re: [PATCHES] Better default_statistics_target

From
"Guillaume Smet"
Date:
On Dec 5, 2007 3:26 PM, Greg Sabino Mullane <greg@turnstep.com> wrote:
> Agreed, this would be a nice 8.4 thing. But what about 8.3 and 8.2? Is
> there a reason not to make this change? I know I've been lazy and not run
> any absolute figures, but rough tests show that raising it (from 10 to
> 100) results in a very minor increase in analyze time, even for large
> databases. I think the burden of a slightly slower analyze time, which
> can be easily adjusted, both in postgresql.conf and right before running
> an analyze, is very small compared to the pain of some queries - which worked
> before - suddenly running much, much slower for no apparent reason at all.

As Tom stated it earlier, the ANALYZE slow down is far from being the
only consequence. The planner will also have more work to do and
that's the hard point IMHO.

Without studying the impacts of this change on a large set of queries
in different cases, it's quite hard to know for sure that it won't
have a negative impact in a lot of cases.

It's a bit too late in the cycle to change that IMHO, especially
without any numbers.

--
Guillaume

Re: [PATCHES] Better default_statistics_target

From
Decibel!
Date:
On Wed, Dec 05, 2007 at 06:49:00PM +0100, Guillaume Smet wrote:
> On Dec 5, 2007 3:26 PM, Greg Sabino Mullane <greg@turnstep.com> wrote:
> > Agreed, this would be a nice 8.4 thing. But what about 8.3 and 8.2? Is
> > there a reason not to make this change? I know I've been lazy and not run
> > any absolute figures, but rough tests show that raising it (from 10 to
> > 100) results in a very minor increase in analyze time, even for large
> > databases. I think the burden of a slightly slower analyze time, which
> > can be easily adjusted, both in postgresql.conf and right before running
> > an analyze, is very small compared to the pain of some queries - which worked
> > before - suddenly running much, much slower for no apparent reason at all.
>
> As Tom stated it earlier, the ANALYZE slow down is far from being the
> only consequence. The planner will also have more work to do and
> that's the hard point IMHO.

How much more? Doesn't it now use a binary search? If so, ISTM that
going from 10 to 100 would at worst double the time spent finding the
bucket we need. Considering that we're talking something that takes
microseconds, and that there's a huge penalty to be paid if you have bad
stats estimates, that doesn't seem that big a deal. And on modern
machines it's not like the additional space in the catalogs is going to
kill us.

FWIW, I've never seen anything but a performance increase or no change
when going from 10 to 100. In most cases there's a noticeable
improvement since it's common to have over 100k rows in a table, and
there's just no way to capture any kind of a real picture of that with
only 10 buckets.
--
Decibel!, aka Jim C. Nasby, Database Architect  decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Re: [PATCHES] Better default_statistics_target

From
"Christopher Browne"
Date:
On Dec 6, 2007 6:28 PM, Decibel! <decibel@decibel.org> wrote:
> FWIW, I've never seen anything but a performance increase or no change
> when going from 10 to 100. In most cases there's a noticeable
> improvement since it's common to have over 100k rows in a table, and
> there's just no way to capture any kind of a real picture of that with
> only 10 buckets.

I'd be more inclined to try to do something that was at least somewhat
data aware.

The "interesting theory" that I'd like to verify if I had a chance
would be to run through a by-column tuning using a set of heuristics.
My "first order approximation" would be:

- If a column defines a unique key, then we know there will be no
clustering of values, so no need to increase the count...

- If a column contains a datestamp, then the distribution of values is
likely to be temporal, so no need to increase the count...

- If a column has a highly constricted set of values (e.g. - boolean),
then we might *decrease* the count.

- We might run a query that runs across the table, looking at
frequencies of values, and if it finds a lot of repeated values, we'd
increase the count.

That's a bit "hand-wavy," but that could lead to both increases and
decreases in the histogram sizes.  Given that, we can expect the
overall stat sizes to not forcibly need to grow *enormously*, because
we can hope for there to be cases of shrinkage.


-- 
http://linuxfinances.info/info/linuxdistributions.html
"The definition of insanity is doing the same thing over and over and
expecting different results."  -- assortedly attributed to Albert
Einstein, Benjamin Franklin, Rita Mae Brown, and Rudyard Kipling


Re: [PATCHES] Better default_statistics_target

From
Decibel!
Date:
On Mon, Jan 28, 2008 at 11:14:05PM +0000, Christopher Browne wrote:
> On Dec 6, 2007 6:28 PM, Decibel! <decibel@decibel.org> wrote:
> > FWIW, I've never seen anything but a performance increase or no change
> > when going from 10 to 100. In most cases there's a noticeable
> > improvement since it's common to have over 100k rows in a table, and
> > there's just no way to capture any kind of a real picture of that with
> > only 10 buckets.
>
> I'd be more inclined to try to do something that was at least somewhat
> data aware.
>
> The "interesting theory" that I'd like to verify if I had a chance
> would be to run through a by-column tuning using a set of heuristics.
> My "first order approximation" would be:
>
> - If a column defines a unique key, then we know there will be no
> clustering of values, so no need to increase the count...
>
> - If a column contains a datestamp, then the distribution of values is
> likely to be temporal, so no need to increase the count...
>
> - If a column has a highly constricted set of values (e.g. - boolean),
> then we might *decrease* the count.
>
> - We might run a query that runs across the table, looking at
> frequencies of values, and if it finds a lot of repeated values, we'd
> increase the count.
>
> That's a bit "hand-wavy," but that could lead to both increases and
> decreases in the histogram sizes.  Given that, we can expect the
> overall stat sizes to not forcibly need to grow *enormously*, because
> we can hope for there to be cases of shrinkage.

I think that before doing any of that you'd be much better off
investigating how much performance penalty there is for maxing out
default_statistict_target. If, as I suspect, it's essentially 0 on
modern hardware, then I don't think it's worth any more effort.

BTW, that investigation wouldn't just be academic either; if we could
convince ourselves that there normally wasn't any cost associated with a
high default_statistics_target, we could increase the default, which
would reduce the amount of traffic we'd see on -performance about bad
query plans.
--
Decibel!, aka Jim C. Nasby, Database Architect  decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Re: [PATCHES] Better default_statistics_target

From
Gregory Stark
Date:
"Decibel!" <decibel@decibel.org> writes:

> I think that before doing any of that you'd be much better off
> investigating how much performance penalty there is for maxing out
> default_statistict_target. If, as I suspect, it's essentially 0 on
> modern hardware, then I don't think it's worth any more effort.

That's not my experience. Even just raising it to 100 multiplies the number of
rows ANALYZE has to read by 10. And the arrays for every column become ten
times larger. Eventually they start being toasted...

> BTW, that investigation wouldn't just be academic either; if we could
> convince ourselves that there normally wasn't any cost associated with a
> high default_statistics_target, we could increase the default, which
> would reduce the amount of traffic we'd see on -performance about bad
> query plans.

I suspect we could raise it, we just don't know by how much.

--  Gregory Stark EnterpriseDB          http://www.enterprisedb.com Ask me about EnterpriseDB's 24x7 Postgres support!


Re: [PATCHES] Better default_statistics_target

From
"Guillaume Smet"
Date:
On Jan 31, 2008 12:08 AM, Gregory Stark <stark@enterprisedb.com> wrote:
> "Decibel!" <decibel@decibel.org> writes:
>
> > I think that before doing any of that you'd be much better off
> > investigating how much performance penalty there is for maxing out
> > default_statistict_target. If, as I suspect, it's essentially 0 on
> > modern hardware, then I don't think it's worth any more effort.
>
> That's not my experience. Even just raising it to 100 multiplies the number of
> rows ANALYZE has to read by 10. And the arrays for every column become ten
> times larger. Eventually they start being toasted...

+1. From the tests I did on our new server, I set the
default_statistict_target to 30. Those tests were mainly based on the
ANALYZE time though, not the planner overhead introduced by larger
statistics - with higher values, I considered the ANALYZE time too
high for the benefits. I set it higher on a per column basis only if I
see it can lead to better stats but from all the tests I did so far,
it was sufficient for our data set.

--
Guillaume


Re: [PATCHES] Better default_statistics_target

From
Tom Lane
Date:
"Guillaume Smet" <guillaume.smet@gmail.com> writes:
> On Jan 31, 2008 12:08 AM, Gregory Stark <stark@enterprisedb.com> wrote:
>> That's not my experience. Even just raising it to 100 multiplies the number of
>> rows ANALYZE has to read by 10. And the arrays for every column become ten
>> times larger. Eventually they start being toasted...

> +1. From the tests I did on our new server, I set the
> default_statistict_target to 30. Those tests were mainly based on the
> ANALYZE time though, not the planner overhead introduced by larger
> statistics - with higher values, I considered the ANALYZE time too
> high for the benefits.

eqjoinsel(), for one, is O(N^2) in the number of MCV values kept.
Possibly this could be improved, but in general I'd be real wary
of pushing the default to the moon without some explicit testing of
the impact on planning time.
        regards, tom lane


Re: [PATCHES] Better default_statistics_target

From
"Christopher Browne"
Date:
On Jan 30, 2008 5:58 PM, Decibel! <decibel@decibel.org> wrote:
>
> On Mon, Jan 28, 2008 at 11:14:05PM +0000, Christopher Browne wrote:
> > On Dec 6, 2007 6:28 PM, Decibel! <decibel@decibel.org> wrote:
> > > FWIW, I've never seen anything but a performance increase or no change
> > > when going from 10 to 100. In most cases there's a noticeable
> > > improvement since it's common to have over 100k rows in a table, and
> > > there's just no way to capture any kind of a real picture of that with
> > > only 10 buckets.
> >
> > I'd be more inclined to try to do something that was at least somewhat
> > data aware.
> >
> > The "interesting theory" that I'd like to verify if I had a chance
> > would be to run through a by-column tuning using a set of heuristics.
> > My "first order approximation" would be:
> >
> > - If a column defines a unique key, then we know there will be no
> > clustering of values, so no need to increase the count...
> >
> > - If a column contains a datestamp, then the distribution of values is
> > likely to be temporal, so no need to increase the count...
> >
> > - If a column has a highly constricted set of values (e.g. - boolean),
> > then we might *decrease* the count.
> >
> > - We might run a query that runs across the table, looking at
> > frequencies of values, and if it finds a lot of repeated values, we'd
> > increase the count.
> >
> > That's a bit "hand-wavy," but that could lead to both increases and
> > decreases in the histogram sizes.  Given that, we can expect the
> > overall stat sizes to not forcibly need to grow *enormously*, because
> > we can hope for there to be cases of shrinkage.
>
> I think that before doing any of that you'd be much better off
> investigating how much performance penalty there is for maxing out
> default_statistict_target. If, as I suspect, it's essentially 0 on
> modern hardware, then I don't think it's worth any more effort.
>
> BTW, that investigation wouldn't just be academic either; if we could
> convince ourselves that there normally wasn't any cost associated with a
> high default_statistics_target, we could increase the default, which
> would reduce the amount of traffic we'd see on -performance about bad
> query plans.

There seems to be *plenty* of evidence out there that the performance
penalty would NOT be "essentially zero."

Tom points out:  eqjoinsel(), for one, is O(N^2) in the number of MCV values kept.

It seems to me that there are cases where we can *REDUCE* the
histogram width, and if we do that, and then pick and choose the
columns where the width increases, the performance penalty may be
"yea, verily *actually* 0."

This fits somewhat with Simon Riggs' discussion earlier in the month
about Segment Exclusion; these both represent cases where it is quite
likely that there is emergent data in our tables that can help us to
better optimize our queries.
-- 
http://linuxfinances.info/info/linuxdistributions.html
"The definition of insanity is doing the same thing over and over and
expecting different results."  -- assortedly attributed to Albert
Einstein, Benjamin Franklin, Rita Mae Brown, and Rudyard Kipling


Re: [PATCHES] Better default_statistics_target

From
Decibel!
Date:
On Wed, Jan 30, 2008 at 09:13:37PM -0500, Christopher Browne wrote:
> There seems to be *plenty* of evidence out there that the performance
> penalty would NOT be "essentially zero."
>
> Tom points out:
>    eqjoinsel(), for one, is O(N^2) in the number of MCV values kept.
>
> It seems to me that there are cases where we can *REDUCE* the
> histogram width, and if we do that, and then pick and choose the
> columns where the width increases, the performance penalty may be
> "yea, verily *actually* 0."
>
> This fits somewhat with Simon Riggs' discussion earlier in the month
> about Segment Exclusion; these both represent cases where it is quite
> likely that there is emergent data in our tables that can help us to
> better optimize our queries.

This is all still hand-waving until someone actually measures what the
impact of the stats target is on planner time. I would suggest actually
measuring that before trying to invent more machinery. Besides, I think
you'll need that data for the machinery to make an intelligent decision
anyway...

BTW, with autovacuum I don't really see why we should care about how
long analyze takes, though perhaps it should have a throttle ala
vacuum_cost_delay.
--
Decibel!, aka Jim C. Nasby, Database Architect  decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Re: [PATCHES] Better default_statistics_target

From
Alvaro Herrera
Date:
Decibel! escribió:

> BTW, with autovacuum I don't really see why we should care about how
> long analyze takes, though perhaps it should have a throttle ala
> vacuum_cost_delay.

Analyze already has vacuum delay points (i.e. it is already throttled).

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.


Re: [PATCHES] Better default_statistics_target

From
"Kevin Grittner"
Date:
>>> On Wed, Jan 30, 2008 at  8:13 PM, in message
<d6d6637f0801301813n64fa58eu76385cf8a621907@mail.gmail.com>, "Christopher
Browne" <cbbrowne@gmail.com> wrote:
> There seems to be *plenty* of evidence out there that the performance
> penalty would NOT be "essentially zero."
I can confirm that I have had performance tank because of boosting
the statistics target for selected columns.  It appeared to be time
spent in the planning phase, not a bad plan choice.  Reducing the
numbers restored decent performance.
-Kevin




Re: [PATCHES] Better default_statistics_target

From
"Heikki Linnakangas"
Date:
Kevin Grittner wrote:
>>>> On Wed, Jan 30, 2008 at  8:13 PM, in message
> <d6d6637f0801301813n64fa58eu76385cf8a621907@mail.gmail.com>, "Christopher
> Browne" <cbbrowne@gmail.com> wrote: 
>  
>> There seems to be *plenty* of evidence out there that the performance
>> penalty would NOT be "essentially zero."
>  
> I can confirm that I have had performance tank because of boosting
> the statistics target for selected columns.  It appeared to be time
> spent in the planning phase, not a bad plan choice.  Reducing the
> numbers restored decent performance.

One idea I've been thinking about is to add a step after the analyze, to 
look at the statistics that was gathered. If it looks like the the 
distribution is pretty flat, reduce the data to a smaller set before 
storing it in pg_statistic.

You would still get the hit of longer ANALYZE time, but at least you 
would avoid the hit on query performance where the higher statistics are 
not helpful. We could also print an INFO line along the lines of "you 
might as well lower the statistics target for this table, because it's 
not helping".

No, I don't know how to determine when you could reduce the data, or how 
to reduce it...

--   Heikki Linnakangas  EnterpriseDB   http://www.enterprisedb.com


Re: [PATCHES] Better default_statistics_target

From
Robert Treat
Date:
On Thursday 31 January 2008 09:55, Kevin Grittner wrote:
> >>> On Wed, Jan 30, 2008 at  8:13 PM, in message
>
> <d6d6637f0801301813n64fa58eu76385cf8a621907@mail.gmail.com>, "Christopher
>
> Browne" <cbbrowne@gmail.com> wrote:
> > There seems to be *plenty* of evidence out there that the performance
> > penalty would NOT be "essentially zero."
>
> I can confirm that I have had performance tank because of boosting
> the statistics target for selected columns.  It appeared to be time
> spent in the planning phase, not a bad plan choice.  Reducing the
> numbers restored decent performance.
>

Bad plans from boosting to 100 or less? Or something much higher? 


-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL


Re: [PATCHES] Better default_statistics_target

From
"Kevin Grittner"
Date:
>>> On Thu, Jan 31, 2008 at 10:19 PM, in message
<200801312319.59723.xzilla@users.sourceforge.net>, Robert Treat
<xzilla@users.sourceforge.net> wrote:
> On Thursday 31 January 2008 09:55, Kevin Grittner wrote:
>>
>> I can confirm that I have had performance tank because of boosting
>> the statistics target for selected columns.  It appeared to be time
>> spent in the planning phase, not a bad plan choice.  Reducing the
>> numbers restored decent performance.
>
> Bad plans from boosting to 100 or less? Or something much higher?
I boosted on a large number of columns based on domains.  County
number columns (present in most tables) were set to 80.  Some
columns were set all the way to 1000.  When performance tanked, we
didn't have time to experiment, so we just backed it all out.
Perhaps I could do some more controlled testing soon against 8.3,
to narrow it down and confirm the current status of the issue.  I
do seem to recall that simple queries weren't suffering, it was
those which joined many tables which had multiple indexes.
-Kevin




Re: [PATCHES] Better default_statistics_target

From
Tom Lane
Date:
"Kevin Grittner" <Kevin.Grittner@wicourts.gov> writes:
> On Thu, Jan 31, 2008 at 10:19 PM, in message
> <200801312319.59723.xzilla@users.sourceforge.net>, Robert Treat
> <xzilla@users.sourceforge.net> wrote: 
>> Bad plans from boosting to 100 or less? Or something much higher? 
> I boosted on a large number of columns based on domains.  County
> number columns (present in most tables) were set to 80.  Some
> columns were set all the way to 1000.  When performance tanked, we
> didn't have time to experiment, so we just backed it all out.
> Perhaps I could do some more controlled testing soon against 8.3,
> to narrow it down and confirm the current status of the issue.  I
> do seem to recall that simple queries weren't suffering, it was
> those which joined many tables which had multiple indexes.

That fits with the idea that eqjoinsel() is a main culprit.
        regards, tom lane


Re: [PATCHES] Better default_statistics_target

From
"Greg Sabino Mullane"
Date:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160


> As Tom stated it earlier, the ANALYZE slow down is far from being the
> only consequence. The planner will also have more work to do and
> that's the hard point IMHO.
>
> Without studying the impacts of this change on a large set of queries
> in different cases, it's quite hard to know for sure that it won't
> have a negative impact in a lot of cases.
>
> It's a bit too late in the cycle to change that IMHO, especially
> without any numbers.

The decision to add the magic "99/100" number was made without any
such analysis either, and I can assure you it has caused lots of real-world
problems. Going from 10 to 100 adds a small amount of planner overhead. The
99/100 change adds an order of magnitude speed difference to SELECT queries.
I still cannot see that as anything other than a major performance regression.

- --
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200802032259
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-----BEGIN PGP SIGNATURE-----

iD8DBQFHpo3jvJuQZxSWSsgRA61dAJ4hglXzi/EQT08j/NSWl8UeqI9CigCcDxSs
ob//pk7+jTCWPKlssAYKmy8=
=VKhG
-----END PGP SIGNATURE-----