Re: pg6.4.2 eating records... - Mailing list pgsql-hackers
From: Matthew Hagerty
Subject: Re: pg6.4.2 eating records...
Msg-id: 4.3.1.2.20000601013501.00b18790@127.0.0.1
In response to: Re: pg6.4.2 eating records... (Tom Lane <tgl@sss.pgh.pa.us>)
Responses: Re: pg6.4.2 eating records...
List: pgsql-hackers
At 12:51 AM 6/1/00 -0400, Tom Lane wrote:
>Matthew Hagerty <matthew@venux.net> writes:
>> I have a question about pg6.4.2, I know it is old but upgrading is not an
>> option at this point in time (not my decision). :(
>
>Try to persuade your PHB to reconsider ;-)

I am, believe me, I am! :)

>> Every night I run the following:
>>
>> <sql to drop all indexes>
>> vacuum;
>> <sql to create all indexes>
>> vacuum analyze;
>
>This is a little bizarre, to say the least.  It should be
>
>> <sql to drop all indexes>
>> vacuum analyze;
>> <sql to create all indexes>
>
>There's no point in running two vacuums, and there's even less point
>in dropping/recreating indexes around the vacuum only to proceed to
>run another vacuum with the indexes in place.

Well, I do it that way based on your feedback, Tom. ;)  You said once
that the indexes should be dropped prior to running vacuum, then another
time you said vacuum analyze should be run with the indexes in place.
So I do both.  Is this bad?

>> select a.app_id from applicants as a where a.app_id not in
>> (select n.note_id from appnotes as n where n.note_id=a.app_id);
>>  app_id
>> ------
>>  27555
>>  26446
>>  27556
>>   1734
>>  26502
>>  26246
>> (6 rows)
>
>Ugh.  It would be interesting to see EXPLAIN of this query run just
>before and just after the vacuum sequence.  If it's failing just
>after the nightly vacuum, what makes it start working again by the
>time of the next one?

Actually, it does not fix itself; I add new records every day to the
appnotes table so the application does not break.  I know, I know.

>In all honesty, you are not likely to attract a whole lot of interest
>in fixing 6.4.* bugs at this late date.  My own interest will only
>extend as far as making sure the bug is not still there in 7.0...

I guess I'm not really looking for a fix.  I was just wondering if this
was a known problem with 6.4.2 and/or if there was maybe a patch that
fixed it.  I need to tell my project manager something intelligent so
he can communicate it to the client and get them to spend the money to
have the database upgraded to 7.0.

>> What happened?  Did vacuum eat them or something?  The records are
>> always just missing out of the appnotes table.
>
>My guess is the records are still there, but due to some bug the
>specific query you are using fails to find them.  Postgres has many
>faults, but losing data completely isn't usually one of them.
>
>			regards, tom lane

Thanks Tom!

Matthew
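[For reference, a sketch of the nightly sequence Tom recommends, plus the diagnostics suggested in the thread. The index name is a placeholder, not from the thread; 27555 is one of the app_ids the NOT IN query reported. This is illustrative only, not a tested script against 6.4.2:]

```sql
-- Nightly maintenance per Tom's suggestion: drop indexes, run a single
-- VACUUM ANALYZE, then recreate the indexes -- not two vacuums.
DROP INDEX appnotes_note_id_idx;                       -- placeholder name
VACUUM ANALYZE;
CREATE INDEX appnotes_note_id_idx ON appnotes (note_id);

-- Diagnostic 1 (Tom's request): capture the plan just before and just
-- after the vacuum sequence to see whether it changes.
EXPLAIN select a.app_id from applicants as a where a.app_id not in
  (select n.note_id from appnotes as n where n.note_id=a.app_id);

-- Diagnostic 2: probe one "missing" id directly.  If this returns a row
-- while the NOT IN query still lists 27555, the data is intact and the
-- query/index path is at fault, as Tom guesses.
SELECT note_id FROM appnotes WHERE note_id = 27555;
```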