Thread: database question

database question

From
john.crawford@sirsidynix.com
Date:
Hi all, I have been experiencing some strange behaviour on my Postgres
DB.  I am VERY new to PG, so bear with me, as what I am going to ask is
probably all very basic to you guys.
First off, over the last couple of weeks we have been seeing some large
files being created in the dir /var/lib/pgsql/data/base/16450, for
example

-rw------- 1 postgres postgres 1073741824 Sep 29 15:15 2683
-rw------- 1 postgres root     1073741824 Sep 29 15:15 2613.77
-rw------- 1 postgres root     1073741824 Sep 29 15:15 2613.83
-rw------- 1 postgres root       65347584 Sep 29 15:16 2613.88
-rw------- 1 postgres root     1073741824 Sep 29 15:16 2613.86
-rw------- 1 postgres root     1073741824 Sep 29 15:17 2613.82
-rw------- 1 postgres root     1073741824 Sep 29 15:17 2613.81
-rw------- 1 postgres postgres  380346368 Sep 29 15:17 16451.1
-rw------- 1 postgres postgres  217710592 Sep 29 15:18 33820
-rw------- 1 postgres root      119046144 Sep 29 15:18 2683.1
-rw------- 1 postgres root     1073741824 Sep 29 15:18 2613.84

What are these files and why have they suddenly started to be created
and why so large?

Re: database question

From
"Scott Marlowe"
Date:
On Mon, Sep 29, 2008 at 8:21 AM,  <john.crawford@sirsidynix.com> wrote:
> Hi all, I have been experiencing some strange behaviour on my Postgres
> DB.  I am VERY new to PG, so bear with me, as what I am going to ask is
> probably all very basic to you guys.
> First off, over the last couple of weeks we have been seeing some large
> files being created in the dir /var/lib/pgsql/data/base/16450, for
> example
>
> -rw------- 1 postgres postgres 1073741824 Sep 29 15:15 2683
> -rw------- 1 postgres root     1073741824 Sep 29 15:15 2613.77
> -rw------- 1 postgres root     1073741824 Sep 29 15:15 2613.83
>
> What are these files and why have they suddenly started to be created
> and why so large?

PostgreSQL automatically splits table files into 1G chunks so it can
run on OSes with file size limits.  These are part of the table
identified by the oid 2613.  You can find it by looking in pg_class.
Run psql -E and then \d, and you'll see the queries that psql uses to
create its output; you can muck about with them to see which file is
which relation.
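For example, a sketch run inside psql (2613 here is just the filenode
taken from the file names in the listing above):

```sql
-- Map a data file under base/<db_oid>/ back to its relation.
SELECT oid, relname, relkind
FROM pg_class
WHERE relfilenode = 2613;
```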

Also, the contrib module oid2name will tell you these things from the
shell / CLI.
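A sketch of the oid2name approach (assumes a database named "mydb";
adjust the connection options to taste):

```shell
# Show which relation owns filenode 2613 in database mydb
oid2name -d mydb -f 2613

# With no arguments, list all databases and their OIDs
# (this maps the base/16450 directory name to a database)
oid2name
```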

Re: database question

From
Greg Smith
Date:
On Mon, 29 Sep 2008, john.crawford@sirsidynix.com wrote:

> What are these files and why have they suddenly started to be created
> and why so large?

They're the contents of the database; a new file gets created each time
there is another 1GB worth of data in there.  Note that the database
will use more space if data is being UPDATEd and you don't vacuum it
regularly: without vacuuming, each UPDATE effectively adds a new row
rather than reusing the space freed by the old one.

While it's possible to decode what those files are by using oid2name or
pg_class, what you probably want to know instead is what the big tables
and indexes in your database are to figure out what is gobbling space.
The script at http://wiki.postgresql.org/wiki/Disk_Usage will give you
that.
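If fetching the script isn't convenient, a minimal version of the same
idea looks like this (a sketch; pg_size_pretty and
pg_total_relation_size are available in recent PG versions):

```sql
-- Biggest tables and indexes, including toast and index overhead
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 20;
```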

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

Re: database question

From
Tom Lane
Date:
"Scott Marlowe" <scott.marlowe@gmail.com> writes:
> On Mon, Sep 29, 2008 at 8:21 AM,  <john.crawford@sirsidynix.com> wrote:
>> -rw------- 1 postgres root     1073741824 Sep 29 15:15 2613.77
>> -rw------- 1 postgres root     1073741824 Sep 29 15:15 2613.83
>>
>> What are these files and why have they suddenly started to be created
>> and why so large?

> PostgreSQL automatically splits table files into 1G chunks so it can
> run on OSes with file size limits.  These are part of the table
> identified by the oid 2613.  You can find it by looking in pg_class.

Actually relfilenode, not oid, is the thing to look at.  But a table
with such a small relfilenode number must be a system catalog, and a
quick look shows that in any recent PG version it's pg_largeobject.

So the answer is you've got something that's gone hog-wild on creating
large objects and not deleting them; or maybe the application *is*
deleting them but pg_largeobject isn't getting vacuumed.
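To check that theory, a sketch to run in the affected database:

```sql
-- How much space pg_largeobject itself is taking
SELECT pg_size_pretty(pg_relation_size('pg_largeobject'));

-- Reclaim space from deleted large objects
-- (VERBOSE reports what it actually did)
VACUUM VERBOSE pg_largeobject;
```

The contrib program vacuumlo can also delete large objects that are no
longer referenced from any table, if the application is orphaning them.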

            regards, tom lane

Re: database question

From
john.crawford@sirsidynix.com
Date:
>
> So the answer is you've got something that's gone hog-wild on creating
> large objects and not deleting them; or maybe the application *is*
> deleting them but pg_largeobject isn't getting vacuumed.
>
>                         regards, tom lane
Hi all, thanks for the advice.  I ran the script for large files and
the largest is 3GB, followed by 1GB, then another 18 files that total
about 3GB between them.  So about 7GB in total on a 100GB partition
that has 99GB used.  All this is in the data/base/16450 directory in
these large 1GB files.  If I look in the Postgres logs I can see a
vacuum happening every 20 minutes, in that it says "autovacuum:
processing database "db name"" but nothing else.  How do I know if the
vacuum is actually doing anything?
What is pg_largeobject and what can I check with it?  (Sorry, I did
say I was a real novice.)
Really appreciate your help, guys.
John