Re: estimating # of distinct values - Mailing list pgsql-hackers

From tv@fuzzy.cz
Subject Re: estimating # of distinct values
Msg-id d7cb4a682509456a0a51a994cfdc138c.squirrel@sq.gransy.com
In response to Re: estimating # of distinct values  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
Responses Re: estimating # of distinct values  (Josh Berkus <josh@agliodbs.com>)
List pgsql-hackers
> <tv@fuzzy.cz> wrote:
>
>> So even with 10% of the table, there's a 10% probability to get an
>> estimate that's 7x overestimated or underestimated. With lower
>> probability the interval is much wider.
>
> Hmmm...  Currently I generally feel I'm doing OK when the estimated
> rows for a step are in the right order of magnitude -- a 7% error
> would be a big improvement in most cases.  Let's not lose track of
> the fact that these estimates are useful even when they are not
> dead-on accurate.

Well, but that's not 7%, that's 7x! And the theorem says 'greater or
equal', so this is actually the minimum - with lower probability you can
get a much bigger difference. So you can easily get an estimate that is
off by a few orders of magnitude.
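
For context, the theorem in question is (I believe) the negative result
from the Charikar et al. paper discussed upthread: any estimator that
examines only a sample of r rows out of n must, on some input, err by a
ratio of at least sqrt((n-r)/(2r) * ln(1/delta)) with probability at
least delta. A minimal sketch of what that bound gives for various
sample fractions - the formula and attribution are my reconstruction,
not something stated in this message:

import math

def ratio_error_lower_bound(n, r, delta):
    # Lower bound on the ratio error of *any* distinct-value estimator
    # that only sees r of the n rows, holding with probability >= delta.
    # Formula assumed from the Charikar et al. paper, not this thread.
    return math.sqrt((n - r) / (2.0 * r) * math.log(1.0 / delta))

n = 1000000   # hypothetical table size; only the fraction r/n matters
for frac in (0.01, 0.05, 0.10, 0.50):
    r = int(n * frac)
    bound = ratio_error_lower_bound(n, r, 0.10)
    print("sample %5.1f%%: ratio error >= %.1fx with prob >= 10%%"
          % (100 * frac, bound))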

Anyway, I don't really want precise values, just a reasonable estimate.
As I said, we could use the AE (Adaptive Estimator) proposed in the
paper. It has the nice property that it actually reaches the lower bound
(so the inequality turns into an equality). The downside is that there
are estimators that behave better on some datasets.
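
AE itself adapts to the skew it observes in the sample, which makes it
a bit long to sketch here; its simpler sibling from the same paper, GEE
(the Guaranteed-Error Estimator), shows the shape of this estimator
family. A minimal sketch, assuming the usual formulation
D = sqrt(n/r)*f1 + sum of f_j for j >= 2, where f_j is the number of
values seen exactly j times in a sample of r rows out of n (my
formulation of the paper's estimator, not code from this thread):

import math, random
from collections import Counter

def gee_estimate(sample, n):
    # GEE sketch: values seen exactly once get scaled up by sqrt(n/r),
    # values seen twice or more are counted once each.
    r = len(sample)
    counts = Counter(sample)            # value -> occurrences in sample
    freq = Counter(counts.values())     # j -> f_j (values seen j times)
    f1 = freq.get(1, 0)
    rest = sum(fj for j, fj in freq.items() if j >= 2)
    return math.sqrt(float(n) / r) * f1 + rest

# Hypothetical usage: a 10% sample of a skewed 1000-row table.
table = [0] * 900 + list(range(1, 101))    # 101 distinct values
sample = random.sample(table, 100)
print(gee_estimate(sample, len(table)))    # vs. the true count of 101

The sqrt(n/r) factor is, as I read the paper, the geometric mean of the
two extreme corrections for a singleton (it may represent just itself,
or n/r rows hiding outside the sample), and that choice is what gives
the worst-case guarantee.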

Tomas


