Thread: Table corrupt?
Hey, guys...

I suppose this probably belongs in questions, but I thought I might get a
quicker answer here. I have a table in a customer's database that has
gotten quite large via lack of vacuuming (57MB). Queries involving this
table started failing in the form of the backend just sitting there
chugging away on the CPU (but not disk so much) for hours on end. This
began about 24 hours ago, and as of about 12 hours ago, no queries on this
table work. I started a vacuum about 3 hours ago, and it has had upper-90s
percent CPU usage the whole time, and still hasn't completed.

Any ideas on what might be going on here? And, if postgres won't be able
to access the table, is there any hope of extracting rows from the raw
database file, such that I could reconstruct the table?

Please cc responses to me directly, as I'm only on the digest list, and
thanks in advance for any advice/help!

-Brandon :)
> Any ideas on what might be going on here? And, if postgres won't
> be able to access the table, is there any hope of extracting rows from
> the raw database file, such that I could reconstruct the table?

pg_dump -t tablename, drop and reload?

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  + If your life is a hard drive,      |  (610) 353-9879(w)
  + Christ can be your backup.         |  (610) 853-3000(h)
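Spelled out, that suggestion would look something like the following; mydb
and badtable are placeholder names here, so adjust to taste:

    pg_dump -t badtable mydb > badtable.out      # dump schema + data for just the one table
    psql -c "DROP TABLE badtable" mydb           # drop the bloated table
    psql mydb < badtable.out                     # recreate the table and reload its rows

Note that pg_dump still reads the rows through a backend, so if a backend
cannot scan the table at all, the dump may stall the same way the queries do.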
Bruce Momjian said:
> pg_dump -t tablename, drop and reload?

I thought pg_dump got the data out via queries through the backend?
(But, then, I could be wrong... please correct me if so...)

-Brandon :)
> I thought pg_dump got the data out via queries through the backend?
> (But, then, I could be wrong... please correct me if so...)

It gets the data out via COPY, which is slightly different from a normal
query that goes through the parser/optimizer/executor. It is possible you
just have a lot of extra data and it is taking time to vacuum.

If there is a real problem, I would dump the entire database and reload
it.

--
Bruce Momjian                          |  830 Blythe Avenue
maillist@candle.pha.pa.us              |  Drexel Hill, Pennsylvania 19026
  + If your life is a hard drive,      |  (610) 353-9879(w)
  + Christ can be your backup.         |  (610) 853-3000(h)
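For reference, the COPY in question can also be run by hand, and the
whole-database route is just a broader dump and reload. A rough sketch,
with mydb, badtable, and the file paths as placeholders (the server-side
COPY to a file has to run as the database superuser and wants an absolute
path):

    psql -c "COPY badtable TO '/tmp/badtable.dat'" mydb   # same path pg_dump takes for data: rows written as tab-delimited text
    pg_dump mydb > mydb.out                               # schema + data for the whole database
    createdb mydb_check                                    # load into a scratch database first, to verify the dump
    psql mydb_check < mydb.out                             # replay the dump; swap it in once it checks out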
Bruce Momjian said:
> It gets the data out via COPY, which is slightly different from a normal
> query that goes through the parser/optimizer/executor. It is possible you
> just have a lot of extra data and it is taking time to vacuum.

Hmmm... well, the table may be 57 Meg, but then, the backend running the
vacuum has consumed 5 1/2 hours of CPU time so far and is still going
strong, so something tells me there may be something deeper. :)

> If there is a real problem, I would dump the entire database and reload
> it.

Probably good advice, tho the rest of the tables seem to be just fine.
*shrug*

-Brandon :)