Re: [PERFORM] Bad n_distinct estimation; hacks suggested? - Mailing list pgsql-hackers

From: Simon Riggs
Subject: Re: [PERFORM] Bad n_distinct estimation; hacks suggested?
Date:
Msg-id: 1114454941.21529.245.camel@localhost.localdomain
In response to: Re: [PERFORM] Bad n_distinct estimation; hacks suggested? (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: [PERFORM] Bad n_distinct estimation; hacks suggested? (Josh Berkus <josh@agliodbs.com>)
           Re: [PERFORM] Bad n_distinct estimation; hacks suggested? (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
On Mon, 2005-04-25 at 11:23 -0400, Tom Lane wrote:
> Simon Riggs <simon@2ndquadrant.com> writes:
> > My suggested hack for PostgreSQL is to have an option to *not* sample,
> > just to scan the whole table and find n_distinct accurately.
> > ...
> > What price a single scan of a table, however large, when incorrect
> > statistics could force scans and sorts to occur when they aren't
> > actually needed?
>
> It's not just the scan --- you also have to sort, or something like
> that, if you want to count distinct values.  I doubt anyone is really
> going to consider this a feasible answer for large tables.

That's assuming you don't use the HashAgg plan, which seems very
appropriate for the task (...but I take your point otherwise).
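For illustration, a minimal sketch of that approach (table and column
names are hypothetical, and whether the planner actually picks
HashAggregate depends on its cost estimates and available memory):

    -- Exact distinct count via a single full scan; with enough memory the
    -- planner can satisfy the inner DISTINCT with a HashAggregate, no sort.
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM (SELECT DISTINCT col FROM bigtable) AS d;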

If that were the issue, why not keep scanning until you've used up
maintenance_work_mem with hash buckets, then stop and report the result?
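Something along those lines can already be approximated by hand, though
without the "stop early and report" behaviour. A hedged sketch (values
and names hypothetical; note that a plain query's hash table is bounded
by work_mem, whereas the proposal above concerns ANALYZE, which would
use maintenance_work_mem):

    -- Give the hash table room to stay in memory, then scan once.
    SET work_mem = '512MB';   -- hypothetical setting
    SELECT count(*)
    FROM (SELECT DISTINCT col FROM bigtable) AS d;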

The problem is that if you don't do the sort once for statistics
collection, you might accidentally choose plans that force sorts on that
table later. I'd rather do it once...

The other alternative is to allow an ALTER TABLE command to set
statistics manually, but I think I can guess what you'll say to that!
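For concreteness, a sketch of what that could look like. The first form
exists today but only sets the per-column sampling target; the second is
purely hypothetical syntax for the manual override proposed above:

    -- Existing: raise the sampling target for one column (not a manual value).
    ALTER TABLE bigtable ALTER COLUMN col SET STATISTICS 1000;

    -- Hypothetical: directly override the planner's n_distinct estimate.
    ALTER TABLE bigtable ALTER COLUMN col SET (n_distinct = 500000);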

Best Regards, Simon Riggs

