Re: reducing statistics write overhead - Mailing list pgsql-hackers

From Euler Taveira de Oliveira
Subject Re: reducing statistics write overhead
Msg-id 49794753.90902@timbira.com
In response to Re: reducing statistics write overhead  (Alvaro Herrera <alvherre@commandprompt.com>)
Responses Re: reducing statistics write overhead  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
Alvaro Herrera wrote:
> Euler Taveira de Oliveira wrote:
>> Alvaro Herrera wrote:
>>> This could be solved if the workers kept the whole history of tables
>>> that they have vacuumed.  Currently we keep only a single table (the one
>>> being vacuumed right now).  I proposed writing these history files back
>>> when workers were first implemented, but the idea was shot down before
>>> flying very far because it was way too complex (the rest of the patch
>>> was more than complex enough.)  Maybe we can implement this now.
>>>
>> [I don't remember your proposal...] Isn't it just a matter of adding a
>> circular linked list to AutoVacuumShmemStruct? Of course, some locking
>> mechanism needs to exist to guarantee that we don't write at the same
>> time. The size of this linked list would be scaled by a startup-time GUC
>> or a reasonable fixed value.
> 
> Well, the problem is precisely how to size the list.  I don't like the
> idea of keeping an arbitrary number in memory; it adds another
> mostly-useless tunable that we'll need to answer questions about for all
> eternity.
> 
[Poking the code a little...] You're right. We could do that, but it isn't an
elegant solution. What about tracking that information in table_oids?

struct table_oids
{
    bool    skipit;    /* initially false */
    Oid     relid;
};


--
Euler Taveira de Oliveira
http://www.timbira.com/


pgsql-hackers by date:

Previous: Re: pg_get_viewdef formattiing (Bernd Helmle)
Next: Re: reducing statistics write overhead (Tom Lane)