Re: [BUGS] postmaster crashing on semi large tabl - Mailing list pgsql-bugs

From:           Tom Lane
Subject:        Re: [BUGS] postmaster crashing on semi large tabl
Msg-id:         5340.943343353@sss.pgh.pa.us
In response to: postmaster crashing on semi large tabl  (nate <nate@desert-solutions.com>)
List:           pgsql-bugs
nate <nate@desert-solutions.com> writes:
> I have a table that has only about 6000 rows in it, takes up 44941312
> bytes (in the data/base dir),

Not in just one file, I hope.  It should be divided into 1-Gb-sized
segments named resume_user, resume_user.1, etc.  If it really is one
file then the problem likely has something to do with file offset
overflow.  If there are multiple files, how big are they exactly?
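
Something like the following would show exactly which segment files exist
and how big each one is (just a rough sketch -- the data directory and
database name here are guesses, so adjust them for your installation):

    import glob
    import os

    # Both of these are assumptions -- point them at your own installation.
    DATA_DIR = "/usr/local/pgsql/data/base/yourdb"
    RELATION = "resume_user"

    # The heap is stored as RELATION, RELATION.1, RELATION.2, ... with one
    # file per 1-Gb segment, so collect the base file plus numbered segments.
    paths = sorted(glob.glob(os.path.join(DATA_DIR, RELATION)) +
                   glob.glob(os.path.join(DATA_DIR, RELATION + ".*")))

    total = 0
    for path in paths:
        size = os.path.getsize(path)
        total += size
        print("%-20s %12d bytes" % (os.path.basename(path), size))
    print("%-20s %12d bytes" % ("total", total))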

> I can 'vacuum' the big table, but it won't 'vacuum analyze'.  Here's what
> I get when I do a vacuum:

> DEBUG:  --Relation resume_user--
> DEBUG:  Pages 1145: Changed 0, Reapped 740, Empty 0, New 0; Tup 7150: Vac 0, Keep/VTL 0/0, Crash 0, UnUsed 4349, MinLen 56, MaxLen 6148; Re-using: Free/Avail. Space 166708/160508; EndEmpty/Avail. Pages 0/449. Elapsed 0/0 sec.
> DEBUG:  Rel resume_user: Pages: 1145 --> 1145; Tuple(s) moved: 0. Elapsed 4/0 sec.

That's even more interesting, because it says "vacuum" thinks there are
only 1145 disk blocks (about 9Mb, assuming you stuck to the standard
8K block size) in the table.  That's fairly reasonable for a table
with 6000 rows in it, whereas 4 gig is right out.  Why isn't vacuum
noticing all the remaining space?
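
(The 9Mb figure is just vacuum's page count times the block size; a quick
back-of-the-envelope check, assuming the default 8K blocks:)

    BLOCK_SIZE = 8192      # default 8K block size -- an assumption
    pages = 1145           # page count vacuum reported for resume_user

    heap_bytes = pages * BLOCK_SIZE
    print("%d pages * %d bytes = %d bytes (~%.1f MB)" %
          (pages, BLOCK_SIZE, heap_bytes, heap_bytes / (1024.0 * 1024.0)))
    # prints: 1145 pages * 8192 bytes = 9379840 bytes (~8.9 MB)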

I hope you had a backup, because I'm afraid that table is hosed.  But
we should try to figure out how it got that way, so we can prevent it
from happening again.

            regards, tom lane
