Thread: Vacuum Error

Vacuum Error

From: Dave Smith
I am running 7.2 and when doing a vacuum I am getting the following
error....

ERROR:  Cannot insert a duplicate key into unique index
pg_statistic_relid_att_index


Where do I start to fix this?

--
Dave Smith
CANdata Systems Ltd
416-493-9020


Re: Vacuum Error

From: Martin Marques
Quoting Dave Smith <dave.smith@candata.com>:

> I am running 7.2 and when doing a vacuum I am getting the following
> error....
>
> ERROR:  Cannot insert a duplicate key into unique index
> pg_statistic_relid_att_index
>
>
> Where do I start to fix this?

I'm not 100% sure this will work, but I would try rebuilding the index:

REINDEX INDEX pg_statistic_relid_att_index;

You have to be the postgres superuser to do this.

--
select 'mmarques' || '@' || 'unl.edu.ar' AS email;
---------------------------------------------------------
Martín Marqués          |   Programmer, DBA
Telematics Center       |   Administrator
               Universidad Nacional
                    del Litoral
---------------------------------------------------------

Re: Vacuum Error

From: Tom Lane
Dave Smith <dave.smith@candata.com> writes:
> I am running 7.2 and when doing a vacuum I am getting the following
> error....

> ERROR:  Cannot insert a duplicate key into unique index
> pg_statistic_relid_att_index

Hmm, if it were a slightly newer version I'd be interested in how you
got into this state, but since it's 7.2 I'll write it off as an old
bug.  The easiest way out, seeing that pg_statistic is all derived data,
is just
    DELETE FROM pg_statistic;
    (possibly VACUUM FULL pg_statistic here)
    re-ANALYZE everything
You should try the nearby suggestion to REINDEX first, but I'm betting
that that doesn't help.
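[Editor's note: spelled out as psql commands, the recovery steps above might look like the following sketch, run as the postgres superuser against the affected database.]

```sql
-- pg_statistic holds only derived data, so it is safe to clear and rebuild.
DELETE FROM pg_statistic;
VACUUM FULL pg_statistic;   -- reclaim the space from the rows just deleted
ANALYZE;                    -- with no table named, regenerates statistics
                            -- for every table in the database
```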

            regards, tom lane

Re: Vacuum Error

From: "scott.marlowe"
On Mon, 12 Jan 2004, Tom Lane wrote:

> Dave Smith <dave.smith@candata.com> writes:
> > I am running 7.2 and when doing a vacuum I am getting the following
> > error....
>
> > ERROR:  Cannot insert a duplicate key into unique index
> > pg_statistic_relid_att_index
>
> Hmm, if it were a slightly newer version I'd be interested in how you
> got into this state, but since it's 7.2 I'll write it off as an old
> bug.  The easiest way out, seeing that pg_statistic is all derived data,
> is just
>     DELETE FROM pg_statistic;
>     (possibly VACUUM FULL pg_statistic here)
>     re-ANALYZE everything
> You should try the nearby suggestion to REINDEX first, but I'm betting
> that that doesn't help.

Didn't this happen with parallel vacuum / analyzes running in the olden
days?


Re: Vacuum Error

From: "Andrew Bartley"
I believe this error usually comes about due to OID wrapping.

I have experienced this error many times, but not once since I changed all
table definitions to "without oids".

The fix Tom suggested is only temporary. You will need to back up your
database and reload it. The long-term solution is to change every table to
"without oids", that is, if your application is not using them.

The way I did it was to edit the output of pg_dump, making a global change
to the text so that every CREATE TABLE statement includes "without oids".
If any of your functions or client-side code use temp tables, they also need
changing.
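[Editor's note: that global edit could be sketched as a one-line sed pass over the dump file. The pattern assumes a text-format dump that closes each CREATE TABLE block with a line holding only ");" — verify this against your own pg_dump output first, since other multi-line statements could match too.]

```shell
# Hypothetical edit: append WITHOUT OIDS to the closing ");" line of each
# CREATE TABLE block in a text-format dump. In practice you would run
#   sed 's/^);$/) WITHOUT OIDS;/' mydb.sql > mydb_without_oids.sql
# The here-document below just demonstrates the substitution on sample input.
sed 's/^);$/) WITHOUT OIDS;/' <<'EOF'
CREATE TABLE "invoices" (
    id integer,
    amount numeric(10,2)
);
EOF
```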

The longer you leave it the worse it gets.

Good luck

Thanks

Andrew Bartley

-----Original Message-----
From: pgsql-general-owner@postgresql.org
[mailto:pgsql-general-owner@postgresql.org]On Behalf Of Tom Lane
Sent: Tuesday, 13 January 2004 9:31 AM
To: Dave Smith
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Vacuum Error






Re: Vacuum Error

From: Tom Lane
"scott.marlowe" <scott.marlowe@ihs.com> writes:
>> Dave Smith <dave.smith@candata.com> writes:
>>> I am running 7.2 and when doing a vacuum I am getting the following
>>> error....
>>
>>> ERROR:  Cannot insert a duplicate key into unique index
>>> pg_statistic_relid_att_index

> Didn't this happen with parallel vacuum / analyzes running in the olden
> days?

I don't think it could happen in 7.2, because that release took
ExclusiveLock on pg_statistic while inserting rows.

Later releases use a weaker lock (to permit concurrent ANALYZE of
different tables) and are subject to "concurrent update" errors if
you try to run two ANALYZEs of the same table at the same time.
But it doesn't sound like Dave is doing that, anyway.

            regards, tom lane

Re: Vacuum Error

From: Tom Lane
"Andrew Bartley" <abartley@evolvosystems.com> writes:
> I believe this error usually comes about due to OID wrapping.

No; an OID collision would have occurred when you tried to create a
table.  If two tables are present in pg_class then they have different
OIDs, and shouldn't have any conflicts in pg_statistic.

Since my last message I've thought of a possibly plausible explanation:
the bt_moveright bug that was isolated just a few months ago.  If two
backends were analyzing some tables (not necessarily the same table) at
the same time, and one of them caused the first root-page split in
pg_statistic_relid_att_index, it would be possible for the other one to
miss spotting an existing row for the OID/attnum it wanted to insert.
I think this could lead to two valid entries for the same OID/attnum in
pg_statistic, and consequently a persistent error on every subsequent
attempt to analyze that table.
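[Editor's note: if that is what happened, the duplicates should be directly visible in the catalog. A query along these lines, grouping on pg_statistic's key columns, could confirm the diagnosis.]

```sql
-- Look for duplicate (starelid, staattnum) pairs -- the condition that
-- would make every subsequent ANALYZE of the affected table fail.
SELECT starelid, staattnum, count(*)
FROM pg_statistic
GROUP BY starelid, staattnum
HAVING count(*) > 1;
```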

This seems moderately plausible because pg_statistic_relid_att_index
would fit in a single btree page up till about 300-400 entries, which
is about right for a moderate-sized database (I see 299 pg_statistic
entries in the current regression database, for example).  First split
could easily happen in a database that had been running for awhile.

The relevant CVS log entry is

2003-07-29 18:18  tgl

    * src/backend/access/nbtree/: nbtsearch.c (REL7_3_STABLE),
    nbtsearch.c (REL7_2_STABLE), nbtsearch.c: Fix longstanding error in
    _bt_search(): should moveright at top of loop not bottom.
    Otherwise we fail to moveright when the root page was split while
    we were "in flight" to it.  This is not a significant problem when
    the root is above the leaf level, but if the root was also a leaf
    (ie, a single-page index just got split) we may return the wrong
    leaf page to the caller, resulting in failure to find a key that is
    in fact present.  Bug has existed at least since 7.1, probably
    forever.

(Note that although the patch was committed into 7.2 series, there has
been no 7.2 release since then.  You could pull REL7_2_STABLE tip if you
wanted to build a 7.2-series server with this fix in place.)

            regards, tom lane

Re: Vacuum Error

From: Kragen Sitaker
On Mon, Jan 12, 2004 at 06:20:23PM -0500, Tom Lane wrote:
> "Andrew Bartley" <abartley@evolvosystems.com> writes:
> > I believe this error usually comes about due to OID wrapping.
>
> No; an OID collision would have occurred when you tried to create a
> table.  If two tables are present in pg_class then they have different
> OIDs, and shouldn't have any conflicts in pg_statistic.

How would that OID collision manifest?  Do you think the error message
might look similar?

Re: Vacuum Error

From: Tom Lane
Kragen Sitaker <kragen+pgsql@airwave.com> writes:
> On Mon, Jan 12, 2004 at 06:20:23PM -0500, Tom Lane wrote:
>> No; an OID collision would have occurred when you tried to create a
>> table.  If two tables are present in pg_class then they have different
>> OIDs, and shouldn't have any conflicts in pg_statistic.

> How would that OID collision manifest?  Do you think the error message
> might look similar?

Similar, but referring to pg_class_oid_index.

            regards, tom lane