---------- Forwarded message ----------
Date: Sat, 30 Jan 1999 10:14:22 -0800
From: Kenric Ashe <kenric@OAKTREE.com>
To: 'astro' <astro@disturbance.ml.org>
Subject: RE: [GENERAL] Inaccessible table?? (fwd)
Yeah, sounds like the index for that table is corrupted.
Richard at Superb.net actually gave me an idea which I should have thought
of myself. Haven't tried it yet, but when you have a problem with an index,
you should be able to just drop the index and then recreate it. I guess I
hadn't thought of that because I thought that's exactly what the Rebuild
button in Enterprise Manager is supposed to do, but I'd be willing to bet
twenty bucks that rebuilding it manually is what it's gonna take to get the
job done.
So I'd recommend that Thomas rebuild the index on that postgres table he's
talking about, if that's possible.
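For what it's worth, in psql that would be a drop followed by a recreate, something like the sketch below. The table, index, and column names here are made up, since Thomas didn't say which index is involved; he'd substitute the real ones (visible via \d on the table).

```sql
-- Hypothetical names: replace big_table, big_table_idx, and some_column
-- with the actual table, index, and indexed column.
DROP INDEX big_table_idx;
CREATE INDEX big_table_idx ON big_table (some_column);
```

If it really is the index that's corrupted, the DROP may succeed even while selects through that index hang, and a vacuum afterwards would be a reasonable sanity check. No guarantees, of course, if the table file itself is damaged.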
-Kenric
> -----Original Message-----
> From: astro [mailto:astro@disturbance.ml.org]
> Sent: Saturday, January 30, 1999 10:14 AM
> To: kenric@oaktree.com
> Subject: [GENERAL] Inaccessible table?? (fwd)
>
>
> I know it's an entirely different platform, but this sounds
> eerily similar
> to the problem you've been having w/ superb...?
>
> Bry
>
> __________________________________
>
> Bryan White
> astro@disturbance.ml.org
> __________________________________
>
>
>
> ---------- Forwarded message ----------
> Date: Sat, 30 Jan 1999 10:39:31 -0500
> From: Thomas Reinke <reinke@e-softinc.com>
> To: pgsql-general@postgreSQL.org
> Subject: [GENERAL] Inaccessible table??
>
> Hi folks...need some help with data recovery. I've been using
> postgres somewhat successfully (success minus reliability problems)
> for about a year. Today I have run into a major problem:
> I have a 1.6 million record table, and I cannot get access to
> all of the data in the table.
>
> Specifically:
> 1. The table is visible to clients - i.e. you can attempt
> a select, pg_dump, etc.
> 2. If a pg_dump is attempted on the table, only the first
> 761 rows are dumped. Thereafter, the server task spins
> forever chewing up CPU cycles and never dumps an
> additional record. In one case (before I killed the
> task) I saw it consume 4 hours of CPU time.
> 3. vacuum does the same...If I vacuum the db, it vacuums
> almost everything but this table (i.e. it gets stuck
> on what I think is this table). If I vacuum the table
> directly, the server task spins endlessly.
> 4. Select statements hang forever (same effect)
>
> All other tables behave "normally".
>
> I have some rather important data collected in this table
> over the past 8 days (since the last backup), and would like
> to try to recover it if at all possible. Any ideas?
>
> Thomas
>