Re: [GENERAL] ANALYZE getting dead tuple count hopelessly wrong - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: [GENERAL] ANALYZE getting dead tuple count hopelessly wrong
Msg-id: 17953.1207177449@sss.pgh.pa.us
In response to: Re: [GENERAL] ANALYZE getting dead tuple count hopelessly wrong (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: [GENERAL] ANALYZE getting dead tuple count hopelessly wrong ("Pavan Deolasee" <pavan.deolasee@gmail.com>)
List: pgsql-hackers
I wrote:
> ... What I propose therefore
> is combining this patch with my older one so that ANALYZE counts
> according to the following rules:

> REDIRECT line pointer:          ignore
> DEAD line pointer:              count as dead
> HEAPTUPLE_LIVE tuple:           count as live, include in statistics pool
> HEAPTUPLE_DEAD:                 count as dead
> HEAPTUPLE_RECENTLY_DEAD:        count as dead
> HEAPTUPLE_INSERT_IN_PROGRESS:   ignore
> HEAPTUPLE_DELETE_IN_PROGRESS:   count as live
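
In code terms, the per-item logic in ANALYZE's sampling loop would look
roughly like this (a sketch only, not the actual patch; the surrounding
loop and the variable names are illustrative):

    /* fragment of the per-item loop over a sampled heap page */
    ItemId      itemid = PageGetItemId(targpage, targoffset);

    if (!ItemIdIsNormal(itemid))
    {
        /* unused and REDIRECT line pointers are ignored entirely ... */
        if (ItemIdIsDead(itemid))
            deadrows += 1;      /* ... but DEAD line pointers count as dead */
        continue;
    }

    targtuple.t_data = (HeapTupleHeader) PageGetItem(targpage, itemid);
    targtuple.t_len = ItemIdGetLength(itemid);

    switch (HeapTupleSatisfiesVacuum(targtuple.t_data, OldestXmin, targbuffer))
    {
        case HEAPTUPLE_LIVE:
            liverows += 1;
            /* also a candidate for the statistics sample */
            break;

        case HEAPTUPLE_DEAD:
        case HEAPTUPLE_RECENTLY_DEAD:
            deadrows += 1;
            break;

        case HEAPTUPLE_INSERT_IN_PROGRESS:
            /* ignore (but see below about our own insertions) */
            break;

        case HEAPTUPLE_DELETE_IN_PROGRESS:
            liverows += 1;
            break;

        default:
            elog(ERROR, "unexpected HeapTupleSatisfiesVacuum result");
            break;
    }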

While working on this I realized that there's a special case ANALYZE has
to face that is not faced by VACUUM: it might see tuples inserted or
deleted by its own transaction.  For example, consider:

    begin;
    ... load lots of data into mytable ...
    analyze mytable;
    ... issue complex queries against mytable ...
    commit;

This is not an uncommon scenario, particularly with respect to temporary
tables.  ANALYZE's historical behavior of sampling everything that's
good according to SnapshotNow does the right thing here, but ignoring
INSERT_IN_PROGRESS tuples would not.

The right way seems to be to treat our own insertions as live during
ANALYZE, but then subtract off our own pending insertions from the
live-tuples count sent to the stats collector.  pgstat_report_analyze()
can handle the latter part by groveling through the backend's pending
statistics data.
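
A rough sketch of that adjustment inside pgstat_report_analyze(), on the
assumption that the backend's pending per-table counters are chained up
through subtransaction levels as shown (the structure and field names
here are assumptions, not final code):

    /* livetuples is the count ANALYZE is about to report for rel */
    if (rel->pgstat_info != NULL)
    {
        PgStat_TableXactStatus *trans;

        /* walk up through any pending (sub)transaction counter levels */
        for (trans = rel->pgstat_info->trans; trans != NULL; trans = trans->upper)
            livetuples -= trans->tuples_inserted - trans->tuples_deleted;

        /* never report a negative count */
        livetuples = Max(livetuples, 0);
    }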

Comments?
        regards, tom lane

