Thread: too many clog files

too many clog files

From
Duan Ligong
Date:
Hi, there.

I have encountered an issue where there are too many
clog files under the .../pg_clog/ directory. Some of them
were created as long as a month ago.

My questions:
- Does vacuum delete old clog files?
- Can we control the maximum number of clog files?
- When, or in what case, is a new clog file produced?
- Is there a mechanism by which clog files are recycled?

# My PostgreSQL version is 8.1.

Regards
Duan



Re: too many clog files

From
Alvaro Herrera
Date:
Duan Ligong wrote:
> Hi, there.
>
> I have encountered an issue that there are too many
> clog file under the .../pg_clog/ directory. Some of them
> were even produced one month ago.

If you're going to repost a question, it is only polite that you link to
the answers already provided.  Particularly so when some of your
questions were already answered.

--
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Re: too many clog files

From
Duan Ligong
Date:
Alvaro Herrera wrote:
> Duan Ligong wrote:
> > I have encountered an issue that there are too many
> > clog file under the .../pg_clog/ directory. Some of them
> > were even produced one month ago.
>
> If you're going to repost a question, it is only polite that you link to
> the answers already provided.  Particularly so when some of your
> questions were already answered.

OK, the following is the answer you gave earlier:
http://archives.postgresql.org/pgsql-performance/2008-08/msg00346.php
I appreciate your reply, but please keep the lecturing
to PostgreSQL itself.

In fact, I didn't get a satisfactory answer from you because
"Sorry, you ask more questions that I have time to answer right now."

So I have tried to make my questions fewer and simpler, as follows:
- Does vacuum delete old clog files?
- Can we control the maximum number of clog files?
- When, or in what case, is a new clog file produced?
- Is there a mechanism by which clog files are recycled? If there is,
what is the mechanism?

# My PostgreSQL version is 8.1.
I have encountered an issue where there are too many
clog files under the .../pg_clog/ directory. Some of them
were created as long as a month ago.

Thanks
Duan






Re: too many clog files

From
Greg Smith
Date:
On Tue, 2 Sep 2008, Duan Ligong wrote:

> - Does Vacuum delete the old clog files?

Yes, if those transactions are all done.  One possibility here is that
you've got some really long-running transaction floating around that is
keeping normal clog cleanup from happening.  Take a look at the output
from "select * from pg_stat_activity" and see if there are any really old
transactions floating around.
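
As a minimal sketch of that check (using only the columns 8.1's
pg_stat_activity actually has; note they stay empty unless
stats_command_string is enabled), something like this narrows the
list to old activity:

select procpid, usename, query_start, current_query
  from pg_stat_activity
 where query_start < now() - interval '10 minutes'
 order by query_start;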

> - Can we controll the maximum number of the clog files?

The reference Alvaro suggested at
http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
goes over that.  If you reduce autovacuum_freeze_max_age, that reduces the
expected maximum size of the clog files.  "The default, 200 million
transactions, translates to about 50MB of pg_clog storage."  If you've got
significantly more than 50MB of files in there, normally that means that
either you're not running vacuum to clean things up usefully, or there's
an old open transaction still floating around.
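
To spell out the arithmetic behind that figure: each transaction's status
takes 2 bits, so 200,000,000 transactions need 200,000,000 / 4 =
50,000,000 bytes, i.e. roughly 50MB, or about 190 of the 256KB segment
files.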

> - When, or in what case is  a new clog file produced?

Every 32K transactions.  See http://wiki.postgresql.org/wiki/Hint_Bits for
some clarification about what the clog files actually do.  I recently
collected some notes from this list and from the source code README files
to give a high-level view there of what actually goes on under the hood in
this area.
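
For reference, the status recorded per transaction is one of four 2-bit
codes (as defined in src/include/access/clog.h in the 8.x source tree):

  0x00  TRANSACTION_STATUS_IN_PROGRESS
  0x01  TRANSACTION_STATUS_COMMITTED
  0x02  TRANSACTION_STATUS_ABORTED
  0x03  TRANSACTION_STATUS_SUB_COMMITTED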

> - Is there a mechanism that the clog files are recycled? If there is ,
> what is the mechanism?

The old ones should get wiped out by vacuum's freezing mechanism.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: too many clog files

From
Tom Lane
Date:
Greg Smith <gsmith@gregsmith.com> writes:
> On Tue, 2 Sep 2008, Duan Ligong wrote:
>> - Can we controll the maximum number of the clog files?

> The reference Alvaro suggested at
> http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
> goes over that.  If you reduce autovacuum_freeze_max_age, that reduces the
> expected maximum size of the clog files.

Duan's first post stated that he is running 8.1, which I believe had a
quite different rule for truncating pg_clog --- it definitely has not
got the parameter autovacuum_freeze_max_age.

Too tired to go reread the 8.1 code right now, but don't quote 8.3
docs at him, 'cause they're not too relevant.

            regards, tom lane

Re: too many clog files

From
Guillaume Lelarge
Date:
Greg Smith wrote:
> [...]
>> - When, or in what case is  a new clog file produced?
>
> Every 32K transactions.

Are you sure about this?

My clog files get up to 262144 bytes, which means 1,000,000 transactions'
status: 262144 bytes are 2 Mb (megabits), so if a status is 2 bits, a
file holds 1M transactions' status.

AFAICT, 32K transactions' status are available on a single (8KB) page.

Or am I wrong?


--
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com

Re: too many clog files

From
Greg Smith
Date:
On Tue, 2 Sep 2008, Guillaume Lelarge wrote:

> AFAICT, 32K transactions' status are available on a single (8KB) page.

You're right; I had it right on the referred-to page but mangled it when
writing the e-mail.

> 262144 bytes are 2 Mb (megabits), so if a status is 2 bits, [a clog
> file] holds 1M transactions' status.

Exactly.
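
To spell out the arithmetic: an 8192-byte page holds 8192 * 4 = 32,768
transaction statuses (4 per byte at 2 bits each), and a full 262,144-byte
segment file is 32 such pages, i.e. 32 * 32,768 = 1,048,576 (about 1M)
transactions.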

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: too many clog files

From
Duan Ligong
Date:
Thanks for your reply.

Greg wrote:
> On Tue, 2 Sep 2008, Duan Ligong wrote:
> > - Does Vacuum delete the old clog files?
>
> Yes, if those transactions are all done.  One possibility here is that
> you've got some really long-running transaction floating around that is
> keeping normal clog cleanup from happening.  Take a look at the output
> from "select * from pg_stat_activity" and see if there are any really old
> transactions floating around.

Well, we could not wait that long, so we just moved the old clog files
out of the way. The PostgreSQL system is running well.
But now the size of pg_clog has exceeded 50MB and there
are 457 clog files.
- - - -
[root@set19:AN0101 hydragui]# du -sh pgdata/pg_clog
117M    pgdata/pg_clog
- - - -

My questions are:
- How can we determine whether there is a long-running transaction,
based on the output of "select * from pg_stat_activity"?
It seems the start time of transactions is not shown.
- How should we deal with a long-running transaction if there is one?
Can we do something to avoid long-running transactions?

The following is the output of "select * from pg_stat_activity".
# It seems there is no query_start time information.
- - - -
xxxdb=> select * from pg_stat_activity;
 datid |   datname   | procpid | usesysid | usename  | current_query | query_start
-------+-------------+---------+----------+----------+---------------+-------------
 92406 | xxxdb |   17856 |      100 | myname |               |
 92406 | xxxdb |   31052 |      100 | myname |               |
(2 rows)
- - - -
After about 6 minutes, I executed it again and the output was:
- - - -
xxxdb=> select * from pg_stat_activity;
 datid |   datname   | procpid | usesysid | usename  | current_query | query_start
-------+-------------+---------+----------+----------+---------------+-------------
 92406 | xxxdb |    5060 |      100 | myname |               |
 92406 | xxxdb |    5626 |      100 | myname |               |
(2 rows)

- - - -
# My PostgreSQL version is 8.1.


> > - Can we controll the maximum number of the clog files?
>
> The reference Alvaro suggested at
> http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND
> goes over that.  If you reduce autovacuum_freeze_max_age, that reduces the
> expected maximum size of the clog files.  "The default, 200 million
> transactions, translates to about 50MB of pg_clog storage."  If you've got
> significantly more than 50MB of files in there, normally that means that
> either you're not running vacuum to clean things up usefully, or there's
> an old open transaction still floating around.
>
> > - When, or in what case is  a new clog file produced?
>
> Every 32K transactions.  See http://wiki.postgresql.org/wiki/Hint_Bits for
> some clarification about what the clog files actually do.  I recently
> collected some notes from this list and from the source code README files
> to give a high-level view there of what actually goes on under the hood in
> this area.

It seems that one transaction occupies 2 bits, so 32K transactions should
occupy 8KB, which is the size of one page.
The size of each clog file is 256KB.
Is there other information in the clog files besides the hint bits?
And does that other information, plus the hint bits of 32K transactions,
occupy 256KB?

> > - Is there a mechanism that the clog files are recycled? If there is ,
> > what is the mechanism?
>
> The old ones should get wiped out by vacuum's freezing mechanism.

Does wiping out clog files depend on any configuration besides
vacuum? Do we have to set some parameter to enable vacuum's
cleanup of old clog files?

The following is my postgresql.conf file:
- - - -
tcpip_socket = true
max_connections = 500
port = 5432
shared_buffers = 1000
syslog = 2
log_min_messages = fatal
- - - -
# My PostgreSQL database is very small.

Thanks
Duan





Re: too many clog files

From
Matthew Wakeling
Date:
On Fri, 5 Sep 2008, Duan Ligong wrote:
> Well, we could not wait so long and just moved the old clog files.

Congratulations. You have probably just destroyed your database.

Matthew

--
Lord grant me patience, and I want it NOW!

You may need to increase max_locks_per_transaction

From
"Jonas Pacheco"
Date:
Considering a server with a quad-core processor, 4GB of RAM, and SATA 2
disks, what are the recommended settings for the following parameters?

max_connections:70
max_prepared_transactions?
shared_buffers?
wal_buffers?
max_fsm_relations?
max_fsm_pages?

Regards,
Jonas Rodrigo


Re: You may need to increase max_locks_per_transaction

From
"Claus Guttesen"
Date:
> Considering a server with a quad-core processor, 4GB of RAM, and SATA 2
> disks, what are the recommended settings for the following parameters?
>
> max_connections:70

Depends on how many clients access the database.

> shared_buffers?

I have mine at 512 MB but I will lower it and see how it affects
performance. I have 16 GB in my server.

> max_fsm_relations?
> max_fsm_pages?

Perform a VACUUM VERBOSE ANALYZE and look at the last few lines; they
will tell you whether you need to increase max_fsm_*.
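
A minimal sketch of that check (the exact output wording varies by
version):

VACUUM VERBOSE ANALYZE;
-- the closing INFO lines report how many free-space-map page slots and
-- relations are in use versus the current max_fsm_pages and
-- max_fsm_relations limits; if usage is near a limit, raise that setting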

Consider lowering random_page_cost so the planner favors indexes more
often than sequential scans.
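
To experiment before touching postgresql.conf, it can be set per session
(the value here is just an example):

SET random_page_cost = 3.0;  -- default is 4.0; lower favors index scans
-- then compare plans with EXPLAIN on a representative query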

But if you don't have a decent RAID controller, your data will move
slowly and tuning will make only a minor difference.

--
regards
Claus

When lenity and cruelty play for a kingdom,
the gentler gamester is the soonest winner.

Shakespeare

Re: too many clog files

From
Alvaro Herrera
Date:
Duan Ligong wrote:

> Greg wrote:
> > On Tue, 2 Sep 2008, Duan Ligong wrote:
> > > - Does Vacuum delete the old clog files?
> >
> > Yes, if those transactions are all done.  One possibility here is that
> > you've got some really long-running transaction floating around that is
> > keeping normal clog cleanup from happening.  Take a look at the output
> > from "select * from pg_stat_activity" and see if there are any really old
> > transactions floating around.
>
> Well, we could not wait so long and just moved the old clog files.
> The postgresql system is running well.

Move the old clog files back where they were, and run VACUUM FREEZE in
all your databases.  That should clean up all the old pg_clog files, if
you're really that desperate.  This is not something that I'd recommend
doing on a periodic basis ...
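
Spelled out, that means connecting to each database in turn and running:

VACUUM FREEZE;

once for every database returned by:

SELECT datname FROM pg_database WHERE datallowconn;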

--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

Re: too many clog files

From
"Scott Marlowe"
Date:
On Thu, Sep 4, 2008 at 8:58 PM, Duan Ligong <duanlg@nec-as.nec.com.cn> wrote:
> Thanks for your reply.
>
> Greg wrote:
>> On Tue, 2 Sep 2008, Duan Ligong wrote:
>> > - Does Vacuum delete the old clog files?
>>
>> Yes, if those transactions are all done.  One possibility here is that
>> you've got some really long-running transaction floating around that is
>> keeping normal clog cleanup from happening.  Take a look at the output
>> from "select * from pg_stat_activity" and see if there are any really old
>> transactions floating around.
>
> Well, we could not wait so long and just moved the old clog files.
> The postgresql system is running well.
> But now the size of pg_clog has exceeded 50MB and there
> are 457 clog files.

That is absolutely not the thing to do.  Put them back, and do a
dump-restore on the database if you need to save a few hundred megs on
the drive.  Deleting files from underneath postgresql is a great way
to break your database in new and interesting ways which are often
fatal to your data.

Re: You may need to increase max_locks_per_transaction

From
Greg Smith
Date:
On Fri, 5 Sep 2008, Jonas Pacheco wrote:

> max_prepared_transactions?

This is covered pretty well by the documentation:
http://www.postgresql.org/docs/current/static/runtime-config-resource.html

There are suggestions for everything else you asked about (and a few more
things you should also set) at
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: too many clog files

From
Duan Ligong
Date:
Rex Mabry wrote:
> Try this query for looking at queries.

Thanks so much for your reply.

> SELECT procpid, usename , (now() - query_start) as age, query_start, l.mode, l.granted
>    FROM pg_stat_activity a LEFT OUTER JOIN pg_locks l ON (a.procpid = l.pid)
>    LEFT OUTER JOIN pg_class c ON (l.relation = c.oid)
>    WHERE current_query NOT IN ('<IDLE>')
>    and a.usename <> 'postgres'
>    ORDER BY pid;

I tried it and the output is as follows:
- - - -
mydb=> SELECT procpid, usename, (now() - query_start)
as age, query_start, l.mode, l.granted FROM pg_stat_activity a
LEFT OUTER JOIN pg_locks l ON (a.procpid = l.pid) LEFT
OUTER JOIN pg_class c ON (l.relation = c.oid) WHERE
current_query NOT IN ('<IDLE>') and a.usename <>
'postgres' ORDER BY pid;
procpid | usename  | age | query_start |      mode       | granted
---------+----------+-----+-------------+-----------------+---------
    4134 | myname |     |             | AccessShareLock | t
    4134 | myname |     |             | AccessShareLock | t
    4134 | myname |     |             | ExclusiveLock   | t
    4134 | myname |     |             | AccessShareLock | t
    4134 | myname |     |             | AccessShareLock | t
    4134 | myname |     |             | AccessShareLock | t
    4134 | myname |     |             | AccessShareLock | t
(7 rows)
- - - -
Are these 7 rows long-running transactions?
If they are, what should we do with them? Just kill the processes?
# It seems that the 7 rows are caused by logging into the db and
executing the SELECT itself, so they are not long-running transactions,
are they? The procpid changes when I log out and log in again.

Thanks
Duan





Re: too many clog files

From
Duan Ligong
Date:
Alvaro wrote:
> Move the old clog files back where they were, and run VACUUM FREEZE in
> all your databases.  That should clean up all the old pg_clog files, if
> you're really that desperate.  This is not something that I'd recommend
> doing on a periodic basis ...

Thank you for your suggestions.
I tried VACUUM FREEZE, but it still does not work.
VACUUM FULL was also tried, but it doesn't work either.
The old files were not deleted.

I suspect some configuration item is preventing vacuum from cleaning
up old clog files, because it seems that vacuum does not delete old
clog files at all.
My configuration is as follows:
- - - -
tcpip_socket = true
max_connections = 500
port = 5432
shared_buffers = 1000
syslog = 2
log_min_messages = fatal
# Others are default values.
- - - -

Thanks
Duan





Re: too many clog files

From
"Matt Smiley"
Date:
Hi Duan,

As others have said, you should probably attempt to run pg_dump to export your database.  If that doesn't work,
consider restoring from backup.  If the dump does work, you can create a clean PGDATA directory (using initdb like when
you set up your original installation), and create a fresh copy of your database using the dump file.  Then abandon your
potentially damaged PGDATA directory.

For future reference:

 - The "autovacuum" parameter in postgresql.conf is off by default under Postgres 8.1.  You should probably turn it on
to ensure regular vacuuming, unless you have your own cron job to do the vacuuming.

 - About finding old transactions, there are 2 places you have to look.  The usual place is pg_stat_activity.  The 2nd
place is "pg_prepared_xacts", where prepared transactions are listed.  If there's a prepared transaction in your system,
it might explain why your old commit logs aren't being purged.  The following query shows both prepared and normal
transactions:

select
  l.transactionid,
  age(l.transactionid) as age,  /* measured in number of other transactions elapsed, not in terms of time */
  l.pid,
  case when l.pid is null then true else false end as is_prepared,
  a.backend_start,
  p.prepared as time_xact_was_prepared,
  p.gid as prepared_name
from
  pg_locks l
  left outer join pg_stat_activity a on l.pid = a.procpid
  left outer join pg_prepared_xacts p on l.transactionid = p.transaction
where
  l.locktype = 'transactionid'
  and l.mode = 'ExclusiveLock'
  and l.granted
order by age(l.transactionid) desc
;

 transactionid | age | pid  | is_prepared |         backend_start         |    time_xact_was_prepared     |      prepared_name
---------------+-----+------+-------------+-------------------------------+-------------------------------+--------------------------
        316645 |  44 |      | t           |                               | 2008-09-09 00:31:46.724178-07 | my_prepared_transaction1
        316689 |   0 | 6093 | f           | 2008-09-09 00:40:10.928287-07 |                               |
(2 rows)

Note that unless you run this query as a superuser (e.g. "postgres"), the columns from pg_stat_activity will only be
visible for sessions that belong to you.  To roll back this example prepared transaction, you'd type:
  ROLLBACK PREPARED 'my_prepared_transaction1';
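
One more note, offered as an assumption about 8.1 defaults: empty
current_query and query_start columns in pg_stat_activity (as in your
earlier output) are what you get when stats_command_string is off, which
is its default in 8.1.  A quick check:

SHOW stats_command_string;
-- if this returns "off", set stats_command_string = on in
-- postgresql.conf and reload; query_start should then be populated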

Hope this helps!
Matt



Re: too many clog files

From
"Matt Smiley"
Date:
Alvaro Herrera wrote:
> Move the old clog files back where they were, and run VACUUM FREEZE in
> all your databases.  That should clean up all the old pg_clog files, if
> you're really that desperate.

Has anyone actually seen a CLOG file get removed under 8.2 or 8.3?  How about 8.1?

I'm probably missing something, but looking at src/backend/commands/vacuum.c (under 8.2.9 and 8.3.3), it seems like
vac_truncate_clog() scans through *all* tuples of pg_database looking for the oldest datfrozenxid.  Won't that always be
template0, which as far as I know can never be vacuumed (or otherwise connected to)?

postgres=# select datname, datfrozenxid, age(datfrozenxid), datallowconn from pg_database order by age(datfrozenxid), datname;
     datname      | datfrozenxid |   age    | datallowconn
------------------+--------------+----------+--------------
 template1        |     36347792 |     3859 | t
 postgres         |     36347733 |     3918 | t
 mss_test         |     36347436 |     4215 | t
 template0        |          526 | 36351125 | f
(4 rows)

I looked at several of my 8.2 databases' pg_clog directories, and they all have all the sequentially numbered segments
(0000 through the current segment).  Would it be reasonable for vac_truncate_clog() to skip databases where datallowconn
is false (i.e. template0)?  Looking back to the 8.1.13 code, it does exactly that:
                if (!dbform->datallowconn)
                        continue;

Also, Duan, if you have lots of files under pg_clog, you may be burning through transactions faster than necessary.  Do
your applications leave autocommit turned on?  And since no one else mentioned it, as a work-around for a small
filesystem you can potentially shut down your database, move the pg_clog directory to a separate filesystem, and create
a symlink to it under your PGDATA directory.  That's not a solution, just a mitigation.
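
On the autocommit point: every statement committed by itself consumes its
own transaction id, so batching work into explicit transactions slows xid
consumption considerably.  A minimal sketch (my_table is a hypothetical
example):

BEGIN;
INSERT INTO my_table VALUES (1);
INSERT INTO my_table VALUES (2);
-- ... many more statements, all sharing a single transaction id ...
COMMIT;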



Re: too many clog files

From
"Kevin Grittner"
Date:
> "Matt Smiley" <mss@rentrak.com> wrote:
> Alvaro Herrera wrote:
>> Move the old clog files back where they were, and run VACUUM FREEZE
in
>> all your databases.  That should clean up all the old pg_clog files,
if
>> you're really that desperate.
>
> Has anyone actually seen a CLOG file get removed under 8.2 or 8.3?

Some of my high-volume databases don't quite go back to 0000, but this
does seem to be a problem.  I have confirmed that VACUUM FREEZE on all
but template0 (which doesn't allow connections) does not clean them
up.  No long running transactions are present.

-Kevin

Re: too many clog files

From
"Scott Marlowe"
Date:
On Wed, Sep 10, 2008 at 8:58 AM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
>> "Matt Smiley" <mss@rentrak.com> wrote:
>> Alvaro Herrera wrote:
>>> Move the old clog files back where they were, and run VACUUM FREEZE in
>>> all your databases.  That should clean up all the old pg_clog files, if
>>> you're really that desperate.
>>
>> Has anyone actually seen a CLOG file get removed under 8.2 or 8.3?
>
> Some of my high-volume databases don't quite go back to 0000, but this
> does seem to be a problem.  I have confirmed that VACUUM FREEZE on all
> but template0 (which doesn't allow connections) does not clean them
> up.  No long running transactions are present.

I have a pretty high-volume server that's been online for one month,
and it had somewhere around 53 clog files, going back in order to 0000,
and it was recently vacuumdb -az'ed.  Running another one now.  No
long-running transactions, etc.

Re: too many clog files

From
Tom Lane
Date:
"Scott Marlowe" <scott.marlowe@gmail.com> writes:
> On Wed, Sep 10, 2008 at 8:58 AM, Kevin Grittner
> <Kevin.Grittner@wicourts.gov> wrote:
>> Some of my high-volume databases don't quite go back to 0000, but this
>> does seem to be a problem.  I have confirmed that VACUUM FREEZE on all
>> but template0 (which doesn't allow connections) does not clean them
>> up.  No long running transactions are present.

> I have a pretty high-volume server that's been online for one month,
> and it had somewhere around 53 clog files, going back in order to 0000,
> and it was recently vacuumdb -az'ed.  Running another one now.  No
> long-running transactions, etc.

The expected behavior (in 8.2 and newer) is to maintain about
autovacuum_freeze_max_age transactions' worth of clog; which is to say
about 50MB at the default settings.  If you've got significantly more
than that then we should look more closely.
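
A quick way to check that on 8.2 or newer (a sketch against the system
catalog; pg_clog should span very roughly age/4 bytes for the oldest
database):

SELECT datname,
       age(datfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age') AS freeze_max_age
  FROM pg_database
 ORDER BY xid_age DESC;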

I don't remember what the truncation rule was in 8.1, so I can't speak
to the OP's complaint.

            regards, tom lane

Re: too many clog files

From
"Kevin Grittner"
Date:
>>> Tom Lane <tgl@sss.pgh.pa.us> wrote:

> The expected behavior (in 8.2 and newer) is to maintain about
> autovacuum_freeze_max_age transactions' worth of clog; which is to say
> about 50MB at the default settings.

The active database I checked, where it didn't go all the way back to
0000, had 50 MB of files; so I guess it is working as intended.

It sounds like the advice to the OP that running VACUUM FREEZE on all
databases to clean up the files was off base?

-Kevin

Re: too many clog files

From
"Scott Carey"
Date:
And potentially tune down the number kept by modifying the appropriate
freeze parameter for 8.1 (I'm not sure of the details), so that it keeps
perhaps 20MB or so rather than 50MB.


Re: too many clog files

From
Alvaro Herrera
Date:
Kevin Grittner wrote:

> It sounds like the advice to the OP that running VACUUM FREEZE on all
> databases to clean up the files was off base?

His responses are not explicit enough to know.

--
Alvaro Herrera                                http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.