Re: vacuum analyze corrupts database - Mailing list pgadmin-hackers

From Michael Brusser
Subject Re: vacuum analyze corrupts database
Date
Msg-id DEEIJKLFNJGBEMBLBAHCAEGLDDAA.michael@synchronicity.com
In response to vacuum analyze corrupts database?  (Michael Brusser <michael@synchronicity.com>)
List pgadmin-hackers
... ...
> Our server is running vacuum analyze every night. We had problems
> with this before, when we used v.7.2.1. Now with v.7.3.2 it appears again.
> After the nightly maintenance the application isn't functioning. It looks
> like corruption occurs on (at least) one table.
... ...

I was able to narrow down the problem by dropping all but one
column from the table in question. What I have now is a table
with a single varchar column and just over three hundred records in it.
Running vacuum analyze on this table results in some sort of corruption.
I'm running Pg 7.3.2 on Solaris. There are no explicit settings for vacuum
in postgresql.conf.

Attached is a dump of this table; the problem is easy to reproduce.
Load the dump into your database and it will create the table "nla".

syncdb=# select count (*) from nla where note_url LIKE 'sync:///FUNCTREE%' ;
 count
-------
   121
(1 row)

syncdb=# vacuum analyze nla;
VACUUM

-- --- now repeat the same query ---
syncdb=# select count (*) from nla where note_url LIKE 'sync:///FUNCTREE%' ;
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: WARNING:  Message
from PostgreSQL backend:
        The Postmaster has informed me that some other backend
        died abnormally and possibly corrupted shared memory.
        I have rolled back the current transaction and am
        going to terminate your database system connection and exit.
        Please reconnect to the database system and repeat your query.
Failed.

Now, if I reconnect and delete nla's row from pg_statistic,
I can run the query again.
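Clearing that row can be done along these lines (a sketch, assuming the
7.3 catalog layout, where pg_statistic.starelid holds the table's OID
from pg_class):

```sql
-- Remove the cached statistics rows for table "nla".
-- Assumes pg_statistic.starelid references the table's OID in pg_class.
DELETE FROM pg_statistic
 WHERE starelid = (SELECT oid FROM pg_class WHERE relname = 'nla');
```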

The problem could be data-related: when I poked around and updated
all records beginning with "sync:///FUNCTREE" to something different,
the problem seemed to go away.
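The kind of update meant above would be roughly this (the 'X' prefix is
arbitrary, just enough to change the leading bytes of the values):

```sql
-- Rewrite the suspect values so they no longer start with the
-- problematic prefix.
UPDATE nla
   SET note_url = 'X' || note_url
 WHERE note_url LIKE 'sync:///FUNCTREE%';
```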

Could someone from the Postgres team comment on this, please?
Mike.



