Re: too many clog files - Mailing list pgsql-performance

From Scott Marlowe
Subject Re: too many clog files
Msg-id dcc563d10809050939y1590a0b6y93d7b98bbcff8548@mail.gmail.com
In response to Re: too many clog files  (Duan Ligong <duanlg@nec-as.nec.com.cn>)
List pgsql-performance
On Thu, Sep 4, 2008 at 8:58 PM, Duan Ligong <duanlg@nec-as.nec.com.cn> wrote:
> Thanks for your reply.
>
> Greg wrote:
>> On Tue, 2 Sep 2008, Duan Ligong wrote:
>> > - Does Vacuum delete the old clog files?
>>
>> Yes, if those transactions are all done.  One possibility here is that
>> you've got some really long-running transaction floating around that is
>> keeping normal clog cleanup from happening.  Take a look at the output
>> from "select * from pg_stat_activity" and see if there are any really old
>> transactions floating around.
>
> Well, we could not wait so long and just moved the old clog files.
> The postgresql system is running well.
> But now the size of pg_clog has exceeded 50MB and there
> are 457 clog files.
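
The check Greg suggests above can be sketched as a query against pg_stat_activity. This is a hedged sketch: the column names (procpid, current_query, xact_start) match the 8.x-era view discussed in this thread; in 9.2 and later they were renamed to pid and query.

```sql
-- Sketch: list the oldest open transactions, which can hold back clog cleanup.
-- Requires a live server; 8.3-era pg_stat_activity column names assumed.
SELECT procpid,
       usename,
       xact_start,
       now() - xact_start AS xact_age,
       current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;
```

A transaction showing an xact_age of hours or days is the kind of "really old transaction floating around" that would prevent normal clog truncation.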

That is absolutely not the thing to do.  Put them back, and do a
dump-restore on the database if you need to save a few hundred megs on
the drive.  Deleting files out from under PostgreSQL is a great way
to break your database in new and interesting ways, and the breakage
is often fatal to your data.
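
The dump-and-restore Scott describes can be sketched roughly as follows. This is a hedged outline, not from the thread: the database name "mydb" and the dump path are placeholders, and it assumes a maintenance window with no clients connected.

```shell
# Sketch of a dump-restore cycle (placeholders: mydb, /backup/mydb.dump).
pg_dump -Fc mydb -f /backup/mydb.dump   # custom-format dump of the database
pg_restore -l /backup/mydb.dump          # verify the dump is readable BEFORE dropping
dropdb mydb                              # destructive: only after the dump is verified
createdb mydb
pg_restore -d mydb /backup/mydb.dump     # reload into the fresh database
```

Because the restored database starts with a fresh transaction history, pg_clog shrinks back to a handful of files, without ever deleting files behind the server's back.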
