Errors while vacuuming large tables - Mailing list pgsql-admin

From Jeff Boes
Subject Errors while vacuuming large tables
Date
Msg-id aoemt6$1c9u$1@news.hub.org
Responses Re: Errors while vacuuming large tables
List pgsql-admin
We expire rows by datestamp from a few fairly large tables in our
schema (running PostgreSQL 7.2.1).

Table A: 140 Krows, 600 MB
Table B: 100 Krows, 2.7 GB
Table C: 140 Krows, 2.7 GB
Table D: 3.2 Mrows, 500 MB

so that something like 15-20% of each table is deleted at a crack (done
on a weekend, of course).  After the deletions, a VACUUM FULL is performed
on each of these tables.  Recently, we have been getting this message quite
often on table A:

ERROR:  Parent tuple was not found

which, from what I've read here and elsewhere, I'm led to believe is
caused by a bug in PostgreSQL having to do with rows marked as
read-locked or something similar.  I hope this gets repaired soon,
because it's annoying not to be able to reclaim the space on this table
automatically.
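For reference, the weekly expiry is essentially the following (table and
column names here are placeholders, not our actual schema):

```sql
-- Delete rows older than the retention window, then compact the table.
DELETE FROM table_a WHERE datestamp < now() - interval '30 days';
VACUUM FULL table_a;
```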

But this weekend, we got a different set of errors:

ERROR:  cannot open segment 1 of relation table_D (target
block 2337538109): No such file or directory

and for table B:

NOTICE:  Child itemid in update-chain marked as unused - can't continue
repair_frag
ERROR:  cannot open segment 3 of relation pg_toast_51207070 (target
block 2336096317): No such file or directory

What's the remedy to keep this from happening?  We have an Apache
mod_perl installation running queries against these tables; could an open
read-only transaction cause problems like this?
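If it matters, this is roughly how I've been checking for lingering
backends while the vacuum runs (assuming the stats collector is enabled
with stats_command_string = on; otherwise current_query shows up empty):

```sql
-- List active backends and what they are running, per the 7.2
-- pg_stat_activity view.
SELECT procpid, usename, current_query
FROM pg_stat_activity;
```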

--
Jeff Boes                                      vox 616.226.9550 ext 24
Database Engineer                                     fax 616.349.9076
Nexcerpt, Inc.                                 http://www.nexcerpt.com
           ...Nexcerpt... Extend your Expertise
