Re: Autoanalyze and OldestXmin - Mailing list pgsql-hackers

From Pavan Deolasee
Subject Re: Autoanalyze and OldestXmin
Date
Msg-id BANLkTik9XipfvpCbaO2e4cH-XsYV2gSDDQ@mail.gmail.com
Whole thread Raw
In response to Re: Autoanalyze and OldestXmin  (Pavan Deolasee <pavan.deolasee@gmail.com>)
List pgsql-hackers


On Thu, Jun 9, 2011 at 11:50 AM, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:



Ah, I see. Would there be benefits if we did some special handling for cases where we know that ANALYZE is running outside a transaction block and is not going to invoke any user-defined functions? If the user is running ANALYZE inside a transaction block, he is probably already aware of, and ready to handle, a long-running transaction. But running one under the covers as part of auto-analyze does not seem quite right. The pgbench test already shows the severe bloat that a long-running ANALYZE can cause for small tables, and the many wasteful vacuum runs on those tables.
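(To make the bloat mechanism concrete, here is a toy model, not PostgreSQL code, of how a long-running ANALYZE holds back the OldestXmin horizon; all names are illustrative:)

```python
# Toy model: OldestXmin is the minimum xid among all running transactions.
# VACUUM can reclaim a dead tuple only if the transaction that deleted it
# precedes that horizon.  These helpers are illustrative, not internals.

def oldest_xmin(running_xids):
    """The horizon is the oldest transaction id still running."""
    return min(running_xids)

def removable(deleting_xid, horizon):
    """A dead tuple is removable only if its deleter precedes the horizon."""
    return deleting_xid < horizon

# A long ANALYZE with xid 100 is still open while short transactions
# 505 and 510 come and go, deleting tuples along the way.
horizon = oldest_xmin([100, 505, 510])
assert horizon == 100

# A tuple deleted by xid 400 cannot be reclaimed while ANALYZE runs...
assert not removable(400, horizon)

# ...but becomes removable once ANALYZE commits and the horizon advances.
horizon = oldest_xmin([505, 510])
assert removable(400, horizon)
```

So every dead tuple produced while the ANALYZE runs stays unreclaimable, which is exactly the bloat the pgbench test shows on small, frequently updated tables.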

Another idea would be to split ANALYZE into multiple small transactions, each taking a new snapshot. That might result in bad statistics if the table is undergoing huge change, but in that case the stats will be outdated soon anyway if we run with an old snapshot. I understand there could be issues like counting the same tuple twice or more, but would that be a common case to worry about?
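(A rough sketch of the double-counting concern, using a hypothetical dictionary-as-table model rather than real tuple visibility rules: a row updated between two sampling batches can contribute a version to each batch's snapshot.)

```python
# Toy sketch: sampling in several short transactions, each with a fresh
# snapshot.  If a row is updated between batches, its new version can be
# picked up again, so one logical row contributes two samples.

table = {1: "a-v1", 2: "b-v1", 3: "c-v1"}

def sample_batch(snapshot, keys):
    """Sample the given keys as seen by one snapshot (a dict copy)."""
    return [snapshot[k] for k in keys]

# Batch 1 samples keys 1 and 2 under the first snapshot.
seen = sample_batch(dict(table), [1, 2])

# Row 2 is updated between batches; the next snapshot sees the new version,
# and the sampler happens to visit it again (e.g. the new tuple version
# landed in a block the second batch scans).
table[2] = "b-v2"
seen += sample_batch(dict(table), [2, 3])

# Four samples, but only three distinct logical rows: row 2 counted twice.
assert seen == ["a-v1", "b-v1", "b-v2", "c-v1"]
assert len(seen) == 4 and len(table) == 3
```

Whether this skew matters in practice is the open question above; on a table changing that fast, single-snapshot statistics would go stale quickly anyway.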


FWIW, I searched the archives again and it seems ITAGAKI Takahiro complained about the same issue in the past and had some ideas (including splitting one long transaction). We did not conclude the discussion at that time, but I hope we make some progress now, unless we are certain that there is no low-hanging fruit here.


Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com
