Thread: Vacuum causing strange changes in DB

Vacuum causing strange changes in DB

From: andrew@ernie.2sheds.de
Hi!

What does vacuumdb actually do?

I have a Perl script which inserts records based on certain
criteria. I have noticed that if I do a VACUUM while my script
is running, I get this message:

DBD::Pg::st execute failed: ERROR:  Unable to locate type oid 150 in catalog
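
For context, a minimal sketch of the kind of insert loop involved
(DBD::Pg; the database, table, and credentials below are placeholders,
not my real script):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Placeholder database name and credentials.
    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'andrew', 'secret',
                           { RaiseError => 0, AutoCommit => 1 })
        or die $DBI::errstr;

    my $sth = $dbh->prepare('INSERT INTO items (name, qty) VALUES (?, ?)');

    # Stand-in for records that pass the real selection criteria.
    my @rows = ( [ 'widget', 3 ], [ 'gadget', 5 ] );

    for my $row (@rows) {
        # Under a concurrent VACUUM on pre-7.1 servers, this execute
        # is where errors like "Unable to locate type oid ..." surface.
        $sth->execute(@$row) or warn 'insert failed: ' . $dbh->errstr . "\n";
    }

    $dbh->disconnect;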

I am starting postgres with the options '-i -B 4096 -o -F'.

The other question: is there any other way to get more performance
out of Postgres? VACUUM seems to hang for a very long time on a
table with about 120,000 records (3 indexes), and causes all access
to this table to stop. I notice this slowdown once the postmaster
has reached its maximum memory limit; before that it seems to
run quickly.
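
A rough way to see how long that stall lasts is to time a per-table
VACUUM from DBD::Pg (a sketch only; database, table, and credentials
are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'andrew', 'secret',
                           { RaiseError => 1, AutoCommit => 1 });

    # On these releases VACUUM locks the table against other access,
    # so the elapsed time here is roughly how long other sessions block.
    my $t0 = time;
    $dbh->do('VACUUM ANALYZE items');
    printf "VACUUM ANALYZE items took %d seconds\n", time - $t0;

    $dbh->disconnect;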

Regards

Andrew

Re: Vacuum causing strange changes in DB

From: Tom Lane
andrew@ernie.2sheds.de writes:
> I have a Perl script which inserts records based on certain
> criteria. I have noticed that if I do a VACUUM while my script
> is running, I get this message:
> DBD::Pg::st execute failed: ERROR:  Unable to locate type oid 150 in catalog

Hmm ... considering that OID 150 is not a type at all (it's the function
int4ge), this would seem to indicate that garbage data is being picked
up and fed to a catalog lookup.  I suspect that what you are seeing is
an example of the infamous catalog-cache-entry-dropped-while-in-use
problem, and that the intended type-OID value is being picked up from a
catalog cache row that isn't there anymore due to vacuum having forced
it to be invalidated.

How reproducible is the behavior --- do you get an error consistently,
or only sometimes?  Do you always see exactly this same message, or does
it vary?  The less reproducible it is, the more I'd tend to think it's
the entry-drop bug.

The bug is fixed (at long last) for 7.1, but in prior versions the only
good workaround is not to vacuum system tables while there are other
active backends.
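
One way to arrange that is to vacuum user tables by name, since an
unqualified VACUUM also walks the pg_* system catalogs. A sketch,
assuming the standard pg_tables view (connection details are
placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'postgres', 'secret',
                           { RaiseError => 1, AutoCommit => 1 });

    # Naming each user table keeps VACUUM away from the pg_* system
    # catalogs, which is where the cache-entry problem is triggered.
    my $tables = $dbh->selectcol_arrayref(
        q{SELECT tablename FROM pg_tables WHERE tablename !~ '^pg_'}
    );

    $dbh->do(qq{VACUUM ANALYZE "$_"}) for @$tables;

    $dbh->disconnect;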

            regards, tom lane