Re: pretty bad n_distinct estimate, causing HashAgg OOM on TPC-H - Mailing list pgsql-hackers

From Jeff Janes
Subject Re: pretty bad n_distinct estimate, causing HashAgg OOM on TPC-H
Date
Msg-id CAMkU=1wOk0WJjyUU0ZboZpdpSenGvhPXUW_K+vciorW9+tGaqg@mail.gmail.com
In response to Re: pretty bad n_distinct estimate, causing HashAgg OOM on TPC-H  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Responses Re: pretty bad n_distinct estimate, causing HashAgg OOM on TPC-H  (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List pgsql-hackers
Hi Tomas,

On Sat, Jun 20, 2015 at 8:28 AM, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:


> I've lobotomized the sampling a bit to really produce a random set of blocks first, and that produces way better estimates:
>
>    statistics target    estimate (current)    estimate (random blocks)
>    ------------------------------------------------------------------
>    100                     429491 (10000x)      334430766 (14x)
>    1000                   4240418  (1000x)      439010499 (10x)
>
> Also, the number of sampled blocks is not that different. With target 100, the current sampler reads ~2900 blocks, while a completely random sampler uses 3000 blocks. So, where's the benefit?

I don't know what you did. The block sampling is already random; unless there is some overlooked bug, it can't be made more random. Could you post the patch?

As I understand it, with a target of 100, it should be sampling exactly 30,000 blocks (the target sample size is 300 * statistics_target rows, and the block sampler selects one block per requested row). Some of those blocks will end up having no rows chosen from them (just by chance), but the blocks were still read. If a few thousand of those blocks end up with no tuples in the final sample, the motivation for that was not to save IO; it is just an artifact of how the random sampling works.
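To illustrate, here is a toy simulation in Python (just a sketch, not PostgreSQL's actual sampler; the block count and rows-per-block figures are made up): it picks the target number of blocks uniformly at random, then reservoir-samples the same number of rows from just those blocks, and counts how many of the blocks that were read contribute nothing to the final sample.

import random
from collections import Counter

N_BLOCKS = 1_000_000        # hypothetical table size in blocks
ROWS_PER_BLOCK = 60         # hypothetical tuples per block
TARGET_ROWS = 30_000        # 300 * statistics_target, for target = 100

# Stage 1: sample TARGET_ROWS distinct blocks uniformly at random.
blocks = random.sample(range(N_BLOCKS), TARGET_ROWS)

# Stage 2: reservoir-sample TARGET_ROWS rows from the rows contained
# in the chosen blocks (Algorithm R, simplified).
reservoir = []
seen = 0
for blk in blocks:
    for row in range(ROWS_PER_BLOCK):
        seen += 1
        if len(reservoir) < TARGET_ROWS:
            reservoir.append((blk, row))
        else:
            j = random.randrange(seen)
            if j < TARGET_ROWS:
                reservoir[j] = (blk, row)

contributing = Counter(blk for blk, _ in reservoir)
print(f"blocks read: {len(blocks)}")
print(f"blocks contributing no sampled rows: {len(blocks) - len(contributing)}")

With 60 rows per block, each row survives into the sample with probability about 1/60, so a block ends up contributing nothing with probability roughly (59/60)^60 ≈ e^-1; around a third of the blocks that were read appear nowhere in the final sample, even though all of them cost IO.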

Thanks,

Jeff
