>
> Hey, guys...
> I suppose this probably belongs in questions, but I thought I might
> get a quicker answer here. I have a table in a customer's database
> that has grown quite large (57MB) through lack of vacuuming. Queries
> involving this table started failing, with the backend just sitting
> there chugging away on the CPU (but not much on disk) for hours on
> end.
> This began about 24 hours ago, and as of about 12 hours ago, no
> queries on this table work. I started a vacuum about 3 hours ago;
> it has been running at upper-90s percent CPU the whole time and
> still hasn't completed.
> Any ideas on what might be going on here? And if postgres can't
> access the table, is there any hope of extracting rows from the raw
> database file so that I could reconstruct the table?
> Please cc responses to me directly, as I'm only on the digest list,
> and thanks in advance for any advice/help!
pg_dump -t tablename, drop and reload?
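
Something like this, assuming the database is named "mydb" and the
table "mytable" (both placeholders -- substitute your own names):

    # Assumed names: database "mydb", table "mytable".
    # Dump just the one table to a SQL script:
    pg_dump -t mytable mydb > mytable.sql

    # Drop the bloated table:
    psql mydb -c "DROP TABLE mytable;"

    # Recreate and reload it from the dump:
    psql mydb < mytable.sql

If ordinary queries hang, the dump may too, but it only does a
sequential scan of the table, so it's worth a try.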
--
Bruce Momjian | 830 Blythe Avenue
maillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup. | (610) 853-3000(h)