Thread: Autoanalyze and OldestXmin

Autoanalyze and OldestXmin

From
Pavan Deolasee
Date:

Hi All,

I was running some pgbench tests and observed this phenomenon. This might be a known issue, but I thought it's nevertheless worth mentioning.

The auto-analyze process grabs an MVCC snapshot. If it runs on a very large table, it may take considerable time and stop OldestXmin from advancing. During that time, if there are heavily updated small tables, those will bloat a lot. For example, in the attached log snippet (HEAD is patched a bit to produce more information than you would otherwise see), for a scale factor of 50 and 50 clients:

the branches and tellers tables, which had stable sizes of around 65 and 90 pages respectively, bloat to 402 and 499 pages while the accounts table is being analyzed. The accounts table analyze takes around 5 mins on my decent server, and the branches and tellers tables keep bloating during that time. If these small tables are very actively accessed, vacuum may not even be able to truncate them later, once OldestXmin advances at the end of the auto-analyze.

I understand analyze needs a snapshot to run index predicate functions, but is there something we can do? There is a PROC_IN_ANALYZE flag, but we don't seem to be using it anywhere. Since acquire_sample_rows() returns palloced tuples, can't we let OldestXmin advance after scanning a page by ignoring procs with the flag set, just like we do for PROC_IN_VACUUM?
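To illustrate, here is a rough sketch of what I have in mind, modeled on the existing PROC_IN_VACUUM exclusion in the GetOldestXmin() loop in procarray.c. The loop is heavily simplified, and the PROC_IN_ANALYZE test is the hypothetical part; this is not a patch:

/*
 * Sketch only, simplified from the GetOldestXmin() loop in procarray.c.
 * The PROC_IN_VACUUM test already exists; the PROC_IN_ANALYZE test is
 * the hypothetical addition.
 */
for (index = 0; index < arrayP->numProcs; index++)
{
    volatile PGPROC *proc = arrayP->procs[index];

    /* existing rule: lazy VACUUM's snapshot never holds back OldestXmin */
    if (ignoreVacuum && (proc->vacuumFlags & PROC_IN_VACUUM))
        continue;

    /* hypothetical addition: treat an ANALYZE-only snapshot the same way */
    if (proc->vacuumFlags & PROC_IN_ANALYZE)
        continue;

    if (allDbs || proc->databaseId == MyDatabaseId)
    {
        TransactionId xmin = proc->xmin;    /* simplified; the real loop
                                             * also considers proc->xid */

        if (TransactionIdIsNormal(xmin) &&
            TransactionIdPrecedes(xmin, result))
            result = xmin;
    }
}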

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com

Re: Autoanalyze and OldestXmin

From
Greg Stark
Date:
On Jun 8, 2011 1:49 PM, "Pavan Deolasee" <pavan.deolasee@gmail.com> wrote:
>
> Hi All,
>
> There is a PROC_IN_ANALYZE flag, but we don't seem to be using it anywhere. Since acquire_sample_rows() returns palloced tuples, can't we let OldestXmin advance after scanning a page by ignoring procs with the flag set, just like we do for PROC_IN_VACUUM?

I don't even think the pallocing of tuples is a necessary condition. The key requirement is that this process will not access any other tables in this snapshot. In which case we don't need to take it into account when vacuuming other tables.

It's not safe to vacuum tuples from the table being analyzed because the vacuum could get ahead of the analyze.

This is kind of like the other property it would be nice to know about transactions: that they've locked all the tables they're going to lock. That would be a sufficient but overly strong test. It's possible to know that if other tables are accessed they'll be with a brand new snapshot.

Re: Autoanalyze and OldestXmin

From
Pavan Deolasee
Date:


On Wed, Jun 8, 2011 at 9:03 PM, Greg Stark <stark@mit.edu> wrote:


> On Jun 8, 2011 1:49 PM, "Pavan Deolasee" <pavan.deolasee@gmail.com> wrote:
>>
>> Hi All,
>>
>> There is a PROC_IN_ANALYZE flag, but we don't seem to be using it anywhere. Since acquire_sample_rows() returns palloced tuples, can't we let OldestXmin advance after scanning a page by ignoring procs with the flag set, just like we do for PROC_IN_VACUUM?
>
> I don't even think the pallocing of tuples is a necessary condition. The key requirement is that this process will not access any other tables in this snapshot. In which case we don't need to take it into account when vacuuming other tables.


I first thought that analyze and vacuum cannot run concurrently on the same table since they take a conflicting lock on the table. So even if we ignore the analyze process while calculating the OldestXmin for vacuum, we should be fine since we know they are working on different tables. But I see analyze also acquires sample rows from the inherited tables with a non-conflicting lock. I probably do not understand the analyze code well, but is that the reason why we can't ignore the analyze snapshot while determining OldestXmin for vacuum?

> It's not safe to vacuum tuples from the table being analyzed because the vacuum could get ahead of the analyze.

What can go wrong if that happens? Is the worry that we might get stale analyze results, or are there more serious issues to deal with?

> This is kind of like the other property it would be nice to know about transactions: that they've locked all the tables they're going to lock. That would be a sufficient but overly strong test. It's possible to know that if other tables are accessed they'll be with a brand new snapshot.

I definitely do not understand this :-)

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com

Re: Autoanalyze and OldestXmin

From
Tom Lane
Date:
Pavan Deolasee <pavan.deolasee@gmail.com> writes:
> I first thought that analyze and vacuum cannot run concurrently on the same
> table since they take a conflicting lock on the table. So even if we ignore
> the analyze process while calculating the OldestXmin for vacuum, we should
> be fine since we know they are working on different tables. But I see
> analyze also acquires sample rows from the inherited tables with a
> non-conflicting lock. I probably do not understand the analyze code well,
> but is that the reason why we can't ignore the analyze snapshot while
> determining OldestXmin for vacuum?

The reason why we can't ignore that snapshot is that it's being set for
the use of user-defined functions, which might do practically anything.
They definitely could access tables other than the one under analysis.
(I believe that PostGIS does such things, for example --- it wants to
look at its auxiliary tables for metadata.)

Also keep in mind that we allow ANALYZE to be run inside a transaction
block, which might contain other operations sharing the same snapshot.
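To illustrate the kind of thing I mean (this is illustrative C, not actual PostGIS source): a user-defined function invoked during ANALYZE, say from an index expression, can read an auxiliary table through SPI, and that scan runs under whatever snapshot is active, i.e. the one ANALYZE registered.

#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(srid_is_known);

/*
 * Hypothetical example, not PostGIS code: checks an auxiliary metadata
 * table.  The SPI query below scans another table under the active
 * snapshot, which is exactly why ANALYZE's snapshot cannot be ignored.
 */
Datum
srid_is_known(PG_FUNCTION_ARGS)
{
    int32       srid = PG_GETARG_INT32(0);
    char        query[128];
    bool        found;

    snprintf(query, sizeof(query),
             "SELECT 1 FROM spatial_ref_sys WHERE srid = %d", srid);

    SPI_connect();
    SPI_execute(query, true, 1);    /* read-only, fetch at most one row */
    found = (SPI_processed > 0);
    SPI_finish();

    PG_RETURN_BOOL(found);
}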
        regards, tom lane


Re: Autoanalyze and OldestXmin

From
Jim Nasby
Date:
On Jun 8, 2011, at 10:33 AM, Greg Stark wrote:
> This is kind of like the other property it would be nice to know about
> transactions: that they've locked all the tables they're going to lock.

That sounds like something I've wanted for a very long time: the ability for a transaction to say exactly what tables it's going to access. Presumably disallowing it from taking out any more table locks (anything you do on a table needs at least a share lock, right?) would take care of that.

If we had that information, vacuum could ignore the old snapshots on those tables, so long as it ensures that the vacuum process itself can't read anything from those tables (handling the functional index issue Tom mentioned).
--
Jim C. Nasby, Database Architect                   jim@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net




Re: Autoanalyze and OldestXmin

From
Pavan Deolasee
Date:


On Wed, Jun 8, 2011 at 10:45 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Pavan Deolasee <pavan.deolasee@gmail.com> writes:
>> I first thought that analyze and vacuum cannot run concurrently on the same
>> table since they take a conflicting lock on the table. So even if we ignore
>> the analyze process while calculating the OldestXmin for vacuum, we should
>> be fine since we know they are working on different tables. But I see
>> analyze also acquires sample rows from the inherited tables with a
>> non-conflicting lock. I probably do not understand the analyze code well,
>> but is that the reason why we can't ignore the analyze snapshot while
>> determining OldestXmin for vacuum?
>
> The reason why we can't ignore that snapshot is that it's being set for
> the use of user-defined functions, which might do practically anything.
> They definitely could access tables other than the one under analysis.
> (I believe that PostGIS does such things, for example --- it wants to
> look at its auxiliary tables for metadata.)
>
> Also keep in mind that we allow ANALYZE to be run inside a transaction
> block, which might contain other operations sharing the same snapshot.


Ah, I see. Would there be benefits if we could do some special handling for cases where we know that ANALYZE is running outside a transaction block and that it's not going to invoke any user-defined functions? If the user is running ANALYZE inside a transaction block, he is probably already aware of and ready to handle a long-running transaction. But running them under the covers as part of auto-analyze does not seem quite right. The pgbench test already shows the severe bloat that a long-running analyze may cause for small tables, and the many wasteful vacuum runs on those tables.

Another idea would be to split the ANALYZE into multiple small transactions, each taking a new snapshot. That might result in bad statistics if the table is undergoing heavy change, but in that case the stats will soon be outdated anyway if we run with an old snapshot. I understand there could be issues like counting the same tuple twice or more, but would that be a common enough case to worry about?
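To make that concrete, here is the rough shape I am imagining. This is entirely hypothetical: sample_block_range() is an invented stand-in for the per-range part of acquire_sample_rows() (nchunks, blocks_per_chunk, onerel and rows are assumed context), and the relation would presumably need to be kept locked across the commits with a session-level lock, as vacuum_rel() already does:

/*
 * Hypothetical sketch: sample the relation in block ranges, committing
 * and restarting the transaction between ranges so that each range is
 * scanned under a fresh snapshot and OldestXmin can keep advancing.
 */
int         chunk;
int         numrows = 0;

for (chunk = 0; chunk < nchunks; chunk++)
{
    PopActiveSnapshot();
    CommitTransactionCommand();     /* releases this snapshot's xmin */

    StartTransactionCommand();
    PushActiveSnapshot(GetTransactionSnapshot());

    /* invented helper: sample one contiguous range of blocks */
    numrows += sample_block_range(onerel,
                                  chunk * blocks_per_chunk,
                                  (chunk + 1) * blocks_per_chunk,
                                  rows + numrows);
}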

Thanks,
Pavan 

--
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com

Re: Autoanalyze and OldestXmin

From
Pavan Deolasee
Date:


On Thu, Jun 9, 2011 at 11:50 AM, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:



> Ah, I see. Would there be benefits if we could do some special handling for cases where we know that ANALYZE is running outside a transaction block and that it's not going to invoke any user-defined functions? If the user is running ANALYZE inside a transaction block, he is probably already aware of and ready to handle a long-running transaction. But running them under the covers as part of auto-analyze does not seem quite right. The pgbench test already shows the severe bloat that a long-running analyze may cause for small tables, and the many wasteful vacuum runs on those tables.
>
> Another idea would be to split the ANALYZE into multiple small transactions, each taking a new snapshot. That might result in bad statistics if the table is undergoing heavy change, but in that case the stats will soon be outdated anyway if we run with an old snapshot. I understand there could be issues like counting the same tuple twice or more, but would that be a common enough case to worry about?


FWIW, I searched the archives again, and it seems ITAGAKI Takahiro complained about the same issue in the past and had some ideas (including splitting the work into multiple small transactions). We did not conclude those discussions at the time, but I hope we can make some progress now, unless we are certain that there is no low-hanging fruit here.


Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB     http://www.enterprisedb.com

Re: Autoanalyze and OldestXmin

From
Robert Haas
Date:
On Thu, Jun 9, 2011 at 2:20 AM, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:
> Ah, I see. Would there be benefits if we could do some special handling
> for cases where we know that ANALYZE is running outside a transaction block
> and that it's not going to invoke any user-defined functions?

We'd have to distinguish between user-defined typanalyze functions and
system-defined typanalyze functions, which doesn't seem too appealing,
or robust.

> If the user is
> running ANALYZE inside a transaction block, he is probably already aware of
> and ready to handle a long-running transaction. But running them under the
> covers as part of auto-analyze does not seem quite right. The pgbench test
> already shows the severe bloat that a long-running analyze may cause for
> small tables, and the many wasteful vacuum runs on those tables.
> Another idea would be to split the ANALYZE into multiple small transactions,
> each taking a new snapshot. That might result in bad statistics if the table
> is undergoing heavy change, but in that case the stats will soon be outdated
> anyway if we run with an old snapshot. I understand there could be issues
> like counting the same tuple twice or more, but would that be a common
> enough case to worry about?

I am wondering if we shouldn't be asking ourselves a different
question: why is ANALYZE running long enough on your tables for this
to become an issue?  How long is it taking?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Autoanalyze and OldestXmin

From
Pavan Deolasee
Date:

>
> I am wondering if we shouldn't be asking ourselves a different
> question: why is ANALYZE running long enough on your tables for this
> to become an issue?  How long is it taking?
>

The log file attached in the first post has the details; it's taking around 5 mins for the accounts table with a scale factor of 50 and 50 clients.

Thanks,
Pavan

Re: Autoanalyze and OldestXmin

From
Robert Haas
Date:
On Thu, Jun 9, 2011 at 10:52 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:
>> I am wondering if we shouldn't be asking ourselves a different
>> question: why is ANALYZE running long enough on your tables for this
>> to become an issue?  How long is it taking?
>
> The log file attached in the first post has the details; it's taking around
> 5 mins for the accounts table with a scale factor of 50 and 50 clients.

Wow, that's slow.  Still, what if the user were doing a transaction of
comparable size?  It's not like ANALYZE is doing a gigantic amount of
work.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Autoanalyze and OldestXmin

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Thu, Jun 9, 2011 at 10:52 AM, Pavan Deolasee
> <pavan.deolasee@gmail.com> wrote:
>>> I am wondering if we shouldn't be asking ourselves a different
>>> question: why is ANALYZE running long enough on your tables for this
>>> to become an issue?  How long is it taking?

>> The log file attached in the first post has the details; it's taking around
>> 5 mins for the accounts table with a scale factor of 50 and 50 clients.

> Wow, that's slow.  Still, what if the user were doing a transaction of
> comparable size?  It's not like ANALYZE is doing a gigantic amount of
> work.

I wonder what vacuum cost delay settings are in use ...
        regards, tom lane


Re: Autoanalyze and OldestXmin

From
Pavan Deolasee
Date:

On 09-Jun-2011, at 8:29 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Robert Haas <robertmhaas@gmail.com> writes:
>> On Thu, Jun 9, 2011 at 10:52 AM, Pavan Deolasee
>> <pavan.deolasee@gmail.com> wrote:
>>>> I am wondering if we shouldn't be asking ourselves a different
>>>> question: why is ANALYZE running long enough on your tables for this
>>>> to become an issue?  How long is it taking?
>
>>> The log file attached in the first post has the details; it's taking around
>>> 5 mins for the accounts table with a scale factor of 50 and 50 clients.
>
>> Wow, that's slow.  Still, what if the user were doing a transaction of
>> comparable size?  It's not like ANALYZE is doing a gigantic amount of
>> work.
>
> I wonder what vacuum cost delay settings are in use ...
>

Default settings with 512MB shared buffers

Thanks.
Pavan