Re: Overhauling GUCS - Mailing list pgsql-hackers

From Gregory Stark
Subject Re: Overhauling GUCS
Date
Msg-id 87d4mqwdm8.fsf@oxford.xeocode.com
In response to Re: Overhauling GUCS  (Josh Berkus <josh@agliodbs.com>)
Responses Re: Overhauling GUCS  ("Hakan Kocaman" <hkocam@googlemail.com>)
Re: Overhauling GUCS  (Josh Berkus <josh@agliodbs.com>)
List pgsql-hackers
"Josh Berkus" <josh@agliodbs.com> writes:

> Where analyze does systematically fall down is with databases over 500GB in
> size, but that's not a function of d_s_t but rather of our tiny sample size.

Speak to the statisticians. Our sample size is calculated using the same
theory behind polls which sample 600 people to learn what 250 million people
are going to do on election day. You do NOT need (significantly) larger
samples for larger populations.
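
To put a rough number on it, here's a back-of-envelope sketch in Python (purely
illustrative, nothing from the backend): the 95% margin of error for a sampled
proportion, finite population correction included, barely moves as the
population grows once the sample has a few hundred rows.

import math

def margin_of_error(n_sample, n_population, p=0.5, z=1.96):
    # standard error of a sampled proportion, scaled by the
    # finite population correction
    fpc = math.sqrt((n_population - n_sample) / (n_population - 1))
    return z * math.sqrt(p * (1 - p) / n_sample) * fpc

for pop in (10_000, 1_000_000, 250_000_000):
    print(f"population {pop:>11,}: +/- {margin_of_error(600, pop):.4f}")

# Roughly +/-0.04 in every case: the sample of 600 is what matters,
# not the population size.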

In fact, where those polls run into difficulty is the same place we have some
problems. For *smaller* populations like individual congressional races you
need nearly the same 600-person sample for each of those small races, which
adds up to a lot more than 600 in total. In our case it means that when a
query covers a range much smaller than a whole histogram bucket, the
confidence interval increases too.
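
A quick simulation shows the same effect for us (illustrative Python only; the
30,000-row sample assumes 300 rows per statistics target unit at a target of
100):

import random

random.seed(1)
SAMPLE = 30_000                     # rows ANALYZE would sample at target 100
sample = [random.random() for _ in range(SAMPLE)]  # stand-in for a uniform column

for width in (0.1, 0.01, 0.001):    # queried range as a fraction of the domain
    true_frac = width               # uniform data, so true selectivity == width
    est_frac = sum(0.5 <= v < 0.5 + width for v in sample) / SAMPLE
    rel_err = abs(est_frac - true_frac) / true_frac
    print(f"range width {width}: estimate {est_frac:.5f}, relative error {rel_err:.1%}")

# The narrower the range, the fewer sampled rows fall inside it, so the
# relative error (and hence the confidence interval as a fraction of the
# estimate) grows.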

Also, our estimates for n_distinct are very unreliable. The math behind
sampling for statistics just doesn't work the same way for properties like
n_distinct. On that point Josh is right: we *would* need a sample size
proportional to the whole data set, which would practically require us to scan
the whole table (and to have a technique for summarizing the results in a
nearly constant-sized data structure).
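
A toy example of how badly the sampling math breaks down here (illustrative
Python only, equal-frequency values, not the estimator ANALYZE actually uses):

import random

random.seed(1)
ROWS, SAMPLE = 1_000_000, 30_000

for ndistinct in (1_000, 100_000, 500_000):
    column = list(range(ndistinct)) * (ROWS // ndistinct)  # each value equally frequent
    seen = len(set(random.sample(column, SAMPLE)))
    print(f"true n_distinct {ndistinct:>7,}: distinct values in sample {seen:>6,}")

# The sample finds all 1,000 distinct values in the first case, but for
# 100,000 and 500,000 it sees roughly 26,000 and 29,500: far too close
# together to recover a 5x difference in the true value from a fixed-size
# sample.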

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!

