Re: [PERFORM] Bad n_distinct estimation; hacks suggested? - Mailing list pgsql-hackers

From Mischa Sandberg
Subject Re: [PERFORM] Bad n_distinct estimation; hacks suggested?
Msg-id 1114580284.426f253cc0087@webmail.telus.net
In response to Re: [PERFORM] Bad n_distinct estimation; hacks suggested?  (Andrew Dunstan <andrew@dunslane.net>)
List pgsql-hackers
Quoting Andrew Dunstan <andrew@dunslane.net>:

> After some more experimentation, I'm wondering about some sort of
> adaptive algorithm, a bit along the lines suggested by Marko Ristola,
> but limited to 2 rounds.
>
> The idea would be that we take a sample (either of fixed size, or some
> small proportion of the table), see how well it fits a larger sample
> (say a few times the size of the first sample), and then adjust the
> formula accordingly to project from the larger sample the estimate for
> the full population. Math not worked out yet - I think we want to
> ensure that the result remains bounded by [d,N].
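For concreteness, here is one way that might look as code. The nested
samples, the power-law growth curve, and every name below are my own
guesses at what "adjust the formula" could mean, not anything worked
out in this thread:

import math
import random

def two_round_ndistinct(rows, n_small, n_big, N):
    """Sketch of the 2-round idea: fit a growth exponent between a small
    and a larger sample, then project to the full table, clamped to [d, N]."""
    big = random.sample(rows, n_big)
    small = big[:n_small]                 # round 1 nested inside round 2
    d1 = len(set(small))
    d2 = len(set(big))
    if d1 == 0 or d2 <= d1:
        return d2                         # no growth observed; keep the sample count
    # Guess D(n) ~ c * n**alpha and solve for alpha from the two observations.
    alpha = math.log(d2 / d1) / math.log(n_big / n_small)
    estimate = d2 * (N / n_big) ** alpha
    return min(max(estimate, d2), N)      # keep the result bounded by [d, N]

On a 1M-row table you would call it as, say,
two_round_ndistinct(column_values, 3000, 15000, 1000000).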

Perhaps I can save you some time (yes, I have a degree in Math). If I
understand correctly, you're trying to extrapolate from the correlation
between a tiny sample and a larger sample. Introducing the tiny sample
into any decision can only produce a less accurate result than just
taking the larger sample on its own; GIGO. Whether they are consistent
with one another has no relationship to whether the larger sample
correlates with the whole population. You can think of the tiny sample
like "anecdotal" evidence for wonderdrugs.
--
"Dreams come true, not free." -- S.Sondheim, ITW

