Bruce Momjian said:
>
> >
> > Bruce Momjian said:
> > >
> > > > Any ideas on what might be going on here? And, if postgres won't
> > > > be able to access the table, is there any hope of extracting rows from
> > > > the raw database file, such that I could reconstruct the table?
> > >
> > > pg_dump -t tablename, drop and reload?
> > >
> > I thought pg_dump got the data out via queries through the backend?
> > (But, then, I could be wrong... please correct me if so...)
> >
> > -Brandon :)
> >
>
> It gets the data out via COPY, which is slightly different from a normal
> query that goes through the parser/optimizer/executor. It is possible
> you just have a lot of extra data and it is taking time to vacuum.
>
Hmmm... well, the table may only be 57 Meg, but the backend
running the vacuum has consumed 5 1/2 hours of CPU time so far and is
still going strong, so something tells me there may be something
deeper going on. :)
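
(For anyone following along, the single-table route Bruce suggested
earlier would presumably look something like this; "mytable" and
"mydb" are placeholders for the real names:

    pg_dump -t mytable mydb > mytable.out
    psql -c "DROP TABLE mytable" mydb
    psql mydb < mytable.out

...assuming pg_dump can actually read the table at all, which is the
open question here.)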
> If there is a real problem, I would dump the entire database and reload
> it.
>
Probably good advice, though the rest of the tables seem to be
just fine. *shrug*
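
If I do end up reloading everything, I assume the usual dump/reload
dance still applies; again, "mydb" is just a placeholder:

    pg_dump mydb > mydb.out
    psql -c "DROP DATABASE mydb" template1
    psql -c "CREATE DATABASE mydb" template1
    psql mydb < mydb.out
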
-Brandon :)