Thread: Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

From: Greg Stark
Date:
This one looks *really* good. 
http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf

It does require a single full table scan, but it works in O(n) time and
constant space, and it guarantees confidence intervals for the estimates it
provides, much as histograms do for regular range scans.

It can even keep enough data to provide estimates for n_distinct when
unrelated predicates are applied. I'm not sure Postgres would want to do this
though; it seems like part of the cross-column correlation story more than
the n_distinct story. It appears to require keeping an entire copy of each
sampled record in the stats tables, which would quickly become prohibitive
for wide tables (it would be O(n^2) storage in the number of columns).

It also seems like a lot of work to implement. Nothing about it would be
impossible, but it does require storing a moderately complex data structure.
Perhaps Postgres's new support for data structures will make this easier.

-- 
greg



Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

From: Greg Stark
Date:
Rod Taylor <rbt@sitesell.com> writes:

> On Tue, 2005-04-26 at 19:03 -0400, Greg Stark wrote:
> > This one looks *really* good. 
> > 
> >  http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf
> > 
> > It does require a single full table scan 
> 
> Ack.. Not by default please.
> 
> I have a few large append-only tables (vacuum isn't necessary) which do
> need stats rebuilt periodically.

The algorithm can also naturally be implemented incrementally, which would be
nice for your append-only tables. But that's not Postgres's current philosophy
with statistics. Perhaps a trigger function that you could install yourself to
update the statistics for each newly inserted record would be useful.


The paper is pretty straightforward and easy to read, but here's an executive
summary:

The goal is to gather a uniform sample of *distinct values* in the table as
opposed to a sample of records.

Instead of using a fixed percentage sampling rate for each record, use a hash
of the value to determine whether to include it. At first include everything,
but if the sample space overflows, throw out half the values based on their
hash value. Repeat until finished.

In the end you'll have a sample of 1/2^n of the distinct values from your
entire data set, where n is just large enough for your sample to fit in your
predetermined constant sample space.
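
Here's a rough sketch of that loop in Python, just to make it concrete (the
function names and the choice of hash are mine, not anything from the paper
or from Postgres):

import hashlib

def leading_zero_bits(value):
    # Hash the value and count leading zero bits; any well-mixed hash will do.
    h = int.from_bytes(hashlib.sha256(str(value).encode()).digest(), "big")
    return 256 - h.bit_length()

def distinct_sample(values, max_sample_size=1000):
    level = 0      # keep values whose hash has >= `level` leading zero bits
    sample = set()
    for v in values:
        if leading_zero_bits(v) >= level:
            sample.add(v)
            while len(sample) > max_sample_size:
                # Sample space overflowed: require one more leading zero bit,
                # which throws out roughly half the values kept so far.
                level += 1
                sample = {s for s in sample if leading_zero_bits(s) >= level}
    # Each distinct value survived with probability 1/2^level, so scale up
    # to estimate the total number of distinct values.
    return len(sample) * 2 ** level, sample

Running distinct_sample(row[0] for row in rows) over a column gives an
n_distinct estimate in one pass with a fixed memory bound, which is where the
O(n) time / O(1) space claim comes from.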

-- 
greg



Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

From: Rod Taylor
Date:
On Tue, 2005-04-26 at 19:28 -0400, Greg Stark wrote:
> Rod Taylor <rbt@sitesell.com> writes:
> 
> > On Tue, 2005-04-26 at 19:03 -0400, Greg Stark wrote:
> > > This one looks *really* good. 
> > > 
> > >  http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf
> > > 
> > > It does require a single full table scan 
> > 
> > Ack.. Not by default please.
> > 
> > I have a few large append-only tables (vacuum isn't necessary) which do
> > need stats rebuilt periodically.
> 
> The algorithm can also naturally be implemented incrementally, which would be
> nice for your append-only tables. But that's not Postgres's current philosophy
> with statistics. Perhaps a trigger function that you could install yourself to
> update the statistics for each newly inserted record would be useful.

If/when we have partitions, that'll be good enough. If partitions aren't
available, this would be quite painful for anyone with large tables --
much as the days of old used to be painful for ANALYZE.

-- 



Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

From: Tom Lane
Date:
Rod Taylor <pg@rbt.ca> writes:
> If/when we have partitions, that'll be good enough. If partitions aren't
> available, this would be quite painful for anyone with large tables --
> much as the days of old used to be painful for ANALYZE.

Yeah ... I am very un-enthused about these suggestions to make ANALYZE
go back to doing a full scan ...
        regards, tom lane


Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

From: Greg Stark
Date:
Tom Lane <tgl@sss.pgh.pa.us> writes:

> Rod Taylor <pg@rbt.ca> writes:
> > If/when we have partitions, that'll be good enough. If partitions aren't
> > available, this would be quite painful for anyone with large tables --
> > much as the days of old used to be painful for ANALYZE.
> 
> Yeah ... I am very un-enthused about these suggestions to make ANALYZE
> go back to doing a full scan ...

Well, one option would be to sample only a small number of records, but add
the data found from those records to the existing statistics. That would make
sense for a steady-state situation, but it would make it hard to recover from
a drastic change in data distribution. I think in the case of n_distinct it
would also bias the results towards underestimating n_distinct, but perhaps
that could be corrected for.

But I'm unclear about what situation this is a concern for.

For most use cases users have to run vacuum occasionally. In those cases
"vacuum analyze" would be no worse than a plain vacuum. Note that this
algorithm doesn't require storing extra data because of the large scan, or
performing large sorts per column; it's purely O(n) time and O(1) space.

On the other hand, if you have tables you aren't vacuuming, that means you
perform zero updates or deletes. In that case some sort of incremental
statistics updating would be a good solution -- a better solution even than
sampling.
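
To illustrate the incremental case (again just a sketch, reusing the
hypothetical distinct_sample state from my earlier message): once you have
the (level, sample) pair, a newly inserted value can be folded in with the
same hash test, so an append-only table never needs a full rescan.

def absorb_insert(level, sample, value, max_sample_size=1000):
    # Fold one newly inserted value into an existing distinct sample.
    if leading_zero_bits(value) >= level:
        sample.add(value)
        while len(sample) > max_sample_size:
            # Same overflow rule as the full scan: halve the sample by hash.
            level += 1
            sample = {s for s in sample if leading_zero_bits(s) >= level}
    # Return the updated state plus the new n_distinct estimate.
    return level, sample, len(sample) * 2 ** level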

-- 
greg



Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

From: Rod Taylor
Date:
On Tue, 2005-04-26 at 19:03 -0400, Greg Stark wrote:
> This one looks *really* good. 
> 
>  http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf
> 
> It does require a single full table scan 

Ack.. Not by default please.

I have a few large append-only tables (vacuum isn't necessary) which do
need stats rebuilt periodically.

Let's just say that we've been working hard to upgrade to 8.0, primarily
because pg_dump was taking over 18 hours to make a backup.

-- 
Rod Taylor <rbt@sitesell.com>