Re: [PERFORM] Bad n_distinct estimation; hacks suggested? - Mailing list pgsql-hackers

From Andrew Dunstan
Subject Re: [PERFORM] Bad n_distinct estimation; hacks suggested?
Date
Msg-id 426D565E.8040400@dunslane.net
In response to Re: [PERFORM] Bad n_distinct estimation; hacks suggested?  (Josh Berkus <josh@agliodbs.com>)
Responses Re: [PERFORM] Bad n_distinct estimation; hacks suggested?  (Mischa Sandberg <mischa.sandberg@telus.net>)
List pgsql-hackers

Josh Berkus wrote:

>Simon, Tom:
>
>While it's not possible to get accurate estimates from a fixed-size sample, I
>think it would be possible from a small but scalable sample: say, 0.1% of all
>data pages on large tables, up to the limit of maintenance_work_mem.
>
>Setting up these samples as a % of data pages, rather than as a pure random
>sort, makes this more feasible; for example, a 70GB table would only need to
>sample about 9000 data pages (or 70MB).  Of course, larger samples would lead
>to better accuracy, and this could be set through a revised GUC (e.g.,
>maximum_sample_size, minimum_sample_size).
>
>I just need a little help doing the math ... please?
>
>
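
Just to spell out the arithmetic behind the 9000-page figure above: 0.1% of
the data pages of a 70GB table is roughly 9000 pages at 8KB each, i.e. about
70MB of reads, capped by maintenance_work_mem. Something like this sketch
(the constant names are only illustrative, not existing GUCs):

/*
 * Sketch of the proposed sample-size arithmetic: 0.1% of a table's
 * data pages, capped by maintenance_work_mem.  The constant names
 * are illustrative only, not existing GUCs.
 */
#include <stdio.h>
#include <stdint.h>

#define BLCKSZ          8192            /* default PostgreSQL page size */
#define SAMPLE_FRACTION 0.001           /* 0.1% of data pages */

static int64_t
proposed_sample_pages(int64_t rel_pages, int64_t maintenance_work_mem_kb)
{
    int64_t want  = (int64_t) (rel_pages * SAMPLE_FRACTION);
    int64_t limit = maintenance_work_mem_kb * 1024 / BLCKSZ;

    return (want > limit) ? limit : want;
}

int
main(void)
{
    /* A 70GB table is about 70GB / 8KB = ~9.2 million pages. */
    int64_t rel_pages = 70LL * 1024 * 1024 * 1024 / BLCKSZ;

    /* 0.1% of that is roughly 9000 pages, i.e. about 70MB to read. */
    printf("pages to sample: %lld\n",
           (long long) proposed_sample_pages(rel_pages, 256 * 1024));
    return 0;
}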


After some more experimentation, I'm wondering about some sort of
adaptive algorithm, a bit along the lines suggested by Marko Ristola,
but limited to two rounds.

The idea would be that we take a sample (either of fixed size, or some
small proportion of the table), see how well it fits a larger sample
(say, a few times the size of the first), and then adjust the projection
formula accordingly to estimate the full population from the larger
sample. I haven't worked out the math yet, but I think we want to ensure
that the result stays bounded by [d, N], i.e. between the number of
distinct values actually observed and the total row count.
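
To make the shape of it concrete, here is a minimal sketch. The power-law
growth model d(n) ~= c * n^b is purely a placeholder of mine, since the math
really isn't worked out; only the two-round structure and the clamp to [d, N]
are the point:

/*
 * A minimal sketch of the two-round idea, assuming a power-law model
 * d(n) ~= c * n^b for how the number of distinct values grows with
 * sample size.  The model is purely illustrative -- the math is not
 * worked out; only the two-round shape and the [d, N] clamp come from
 * the idea above.
 */
#include <math.h>
#include <stdio.h>

/*
 * n1, d1: rows and distinct values seen in the first (small) sample.
 * n2, d2: rows and distinct values seen in the larger second sample.
 * N:      estimated total number of rows in the table.
 */
static double
two_round_ndistinct(double n1, double d1,
                    double n2, double d2,
                    double N)
{
    double b, c, est;

    if (d2 <= d1 || n2 <= n1)
        return d2;              /* no growth between rounds: looks saturated */

    /* Fit the exponent from how the distinct count grew between rounds. */
    b = log(d2 / d1) / log(n2 / n1);
    c = d1 / pow(n1, b);

    /* Project from the larger sample out to the whole table. */
    est = c * pow(N, b);

    /* Keep the result bounded by [d, N]. */
    if (est < d2)
        est = d2;
    if (est > N)
        est = N;
    return est;
}

int
main(void)
{
    /*
     * Hypothetical numbers: 3,000 distinct values in a 30,000-row
     * sample, 9,000 distinct in a 150,000-row sample, 10M-row table.
     */
    printf("estimated n_distinct: %.0f\n",
           two_round_ndistinct(30000, 3000, 150000, 9000, 10000000));
    return 0;
}

The interesting question is of course what the right projection formula is;
the power law above is just a stand-in to show where the adjustment would go.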

cheers

andrew


