problems with large table - Mailing list pgsql-general

From Mike Charnoky
Subject problems with large table
Msg-id 46E808DC.7030400@nextbus.com
List pgsql-general
Hi,

I'm using PostgreSQL 8.1.8 and am having trouble with a table which
contains a large amount of data.  Data is constantly being inserted into
the table, roughly a million inserts per hour at peak.  The table
currently has about 100 million entries which take up 14G of space (24G
with indices).
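
(For reference, I measured the sizes with the 8.1 dbsize functions; "events" below stands in for the real table name:)

```sql
-- Heap size alone, and heap plus indexes/TOAST, formatted for readability.
SELECT pg_size_pretty(pg_relation_size('events'))       AS table_size,
       pg_size_pretty(pg_total_relation_size('events')) AS total_size;
```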

The problem in a nutshell: I noticed that certain queries were
excruciatingly slow, despite the use of an index.  A vacuum analyze of
the table would not complete (despite running for 2 days).  A reindex
also failed to complete after one day.

The details: I was trying to perform a count(*) based on a timestamp
field in the table (which is indexed).  An EXPLAIN ANALYZE showed a high
cost even though an index scan was used.  I tried to VACUUM ANALYZE the
table, thinking this might help.  Yes, autovacuum is turned on, but
since PostgreSQL 8.1 does not record when a table was last vacuumed, I
decided to run it manually.  After several hours, the vacuum did not
complete.  So, I disabled the process which was writing to this table
and tried "SET vacuum_cost_delay = 0" before vacuuming.  After two days,
the vacuum did not complete, so I stopped it and tried to reindex the
table, thinking that indices were corrupted.  This also failed to
complete after one day.
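
In case it helps, this is roughly the sequence of commands I ran (table
and column names generalized; the real query uses our timestamp column):

```sql
-- The slow query: count rows in a time window.  The index on the
-- timestamp column is used, but EXPLAIN ANALYZE still shows a huge cost.
EXPLAIN ANALYZE
SELECT count(*)
  FROM events
 WHERE event_time >= '2007-09-01' AND event_time < '2007-09-02';

-- Manual vacuum, with cost-based delay disabled so it runs full speed.
SET vacuum_cost_delay = 0;
VACUUM ANALYZE events;

-- Rebuild all indexes on the table, in case they were bloated or corrupt.
REINDEX TABLE events;
```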

At this point, I'm at a loss.  I've searched the archives for similar
problems, but none of the suggestions have worked.  Is the data in this
table corrupted?  Why are both vacuum and reindex failing to complete?
Is there some sort of fine-tuning I should pay attention to?  Any help
is much appreciated.


Mike
