Re: database question - Mailing list pgsql-general

From john.crawford@sirsidynix.com
Subject Re: database question
Msg-id 22b026dd-e233-44c0-8098-1719b296fd01@x41g2000hsb.googlegroups.com
In response to database question  (john.crawford@sirsidynix.com)
List pgsql-general
>
> So the answer is you've got something that's gone hog-wild on creating
> large objects and not deleting them; or maybe the application *is*
> deleting them but pg_largeobject isn't getting vacuumed.
>
>                         regards, tom lane
Hi all, thanks for the advice.  I ran the script for large files and
the largest is 3GB, followed by a 1GB file, then another 18 files that
total about 3GB between them.  So that's roughly 7GB accounted for on
a 100GB partition that has 99GB used.  All of this is in the
data/base/16450 directory, in those large 1GB files.  If I look in the
Postgres logs I can see a vacuum happening every 20 minutes, in that
it just says 'autovacuum: processing database "db name"' and nothing
else.  How do I know if the vacuum is actually doing anything?
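From a bit of reading I'm guessing something like this might tell me
(not sure my version even has these columns in pg_stat_all_tables, so
treat this as a guess on my part):

  SELECT relname, last_vacuum, last_autovacuum
    FROM pg_stat_all_tables
   WHERE relname = 'pg_largeobject';

Or would I be better off just running "VACUUM VERBOSE pg_largeobject;"
as a superuser and reading what it reports about dead rows?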
What is pg_largeobject and what can I check with it? (Sorry, I did say
I was a real novice.)
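Partly answering my own question: from the docs it looks like I could
check how big pg_largeobject itself is with something like the
following (assuming my release has pg_total_relation_size() and
pg_size_pretty()):

  SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));
  SELECT count(DISTINCT loid) FROM pg_largeobject;

If that first number turns out to be a big chunk of the 99GB, then I
suppose that points at Tom's theory of orphaned large objects (or
pg_largeobject not getting vacuumed)?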
Really appreciate your help, guys.
John
