Re: ANALYZE sampling is too good - Mailing list pgsql-hackers

From Jeff Janes
Subject Re: ANALYZE sampling is too good
Date
Msg-id CAMkU=1weFZ-k=z2Utu=kTHe7R5eqR45ujWdNVGC+UHU7n+RZNw@mail.gmail.com
In response to Re: ANALYZE sampling is too good  (Simon Riggs <simon@2ndQuadrant.com>)
List pgsql-hackers
On Tuesday, December 10, 2013, Simon Riggs wrote:
> On 11 December 2013 00:28, Greg Stark <stark@mit.edu> wrote:
>> On Wed, Dec 11, 2013 at 12:14 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
>>> Block sampling, with parameter to specify sample size. +1
>>
>> Simon this is very frustrating. Can you define "block sampling"?
>
> Blocks selected using Vitter's algorithm, using a parameterised
> fraction of the total.

OK, thanks for defining that.

We only need Vitter's algorithm when we don't know in advance how many items we are sampling from (such as for tuples--unless we want to rely on the previous estimate for the current round of analysis).  But for blocks, we do know how many there are, so there are simpler ways to pick them.
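For what it's worth, here is a minimal sketch of that distinction (Python rather than the C in analyze.c, and the function names are made up purely for illustration): a reservoir keeps a uniform sample from a stream of unknown length, while a known block count lets us draw the sample directly.

import random

def reservoir_sample(stream, k):
    """Vitter-style reservoir sampling (Algorithm R): pick k items
    uniformly from a stream whose length is not known in advance."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            # Each later item replaces a reservoir slot with probability k/(i+1).
            j = random.randint(0, i)
            if j < k:
                sample[j] = item
    return sample

def sample_known_blocks(nblocks, k):
    """When the total block count is known up front (as it is for a
    relation's current size), a plain sample without replacement is enough."""
    return sorted(random.sample(range(nblocks), min(k, nblocks)))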
 

> When we select a block we should read all rows on that block, to help
> identify the extent of clustering within the data.

But we have no mechanism to store such information (or to use it if it were stored), nor even ways to prevent the resulting skew in the sample from seriously messing up the estimates which we do have ways of storing and using.
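To make the skew concrete, here is a rough sketch with synthetic data (Python, invented numbers, nothing taken from ANALYZE itself): when values are physically clustered, a whole-block sample of the same total size sees far fewer distinct values than a row-level sample would, so anything like an ndistinct estimate built on it comes out badly wrong.

import random
from collections import Counter

ROWS_PER_BLOCK = 100
NBLOCKS = 1000
SAMPLE_ROWS = 3000

# Synthetic, heavily clustered table: each block holds rows for a single
# value, so values are perfectly correlated with physical position.
table = [[blk // 10 for _ in range(ROWS_PER_BLOCK)] for blk in range(NBLOCKS)]

# Row-level sample: 3000 rows drawn uniformly from the whole table.
all_rows = [v for block in table for v in block]
row_sample = random.sample(all_rows, SAMPLE_ROWS)

# Block-level sample: 30 whole blocks, i.e. the same 3000 rows.
block_sample = [v for blk in random.sample(range(NBLOCKS), SAMPLE_ROWS // ROWS_PER_BLOCK)
                for v in table[blk]]

# The row sample sees essentially all 100 distinct values; the block sample
# can see at most 30 (usually fewer), so an ndistinct estimate built on it
# comes out badly low.
print("distinct in row sample:  ", len(Counter(row_sample)))
print("distinct in block sample:", len(Counter(block_sample)))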

Cheers,

Jeff
