Thread: vacuumlo fails pgsql ver 8.3

vacuumlo fails pgsql ver 8.3

From: Pat Heuvel

G'day all,

I have a large database with many large objects, linked to a single table.

I have been trying to back up the database so I can migrate to a later version, but the backup has been failing due to problems within pg_largeobject. I am not surprised at these errors, because the server is not protected by a UPS and has had several power outages (I'm working on this with the database owner!) over the past few years.

Recently I tried to perform a vacuumlo, as I thought this might clear up the problems, and the process fails after a long time with the following log:

F:\PostgreSQL\8.3\bin>vacuumlo -U xxx -W xxx
Password:

Failed to remove lo 317198793: ERROR:  large object 317198793 does not exist

Failed to remove lo 4065905288: ERROR:  current transaction is aborted, commands
 ignored until end of transaction block

Failed to remove lo 0: ERROR:  current transaction is aborted, commands ignored
until end of transaction block

F:\PostgreSQL\8.3\bin>

When I added the -v option, there were many "removing lo xxxxx" messages before the above messages appeared. I have previously tried to reindex pg_largeobject, but that process failed as well.

Can someone please offer some advice on how to resolve this?

TIA.

Pat Heuvel
(in Australia)

Re: vacuumlo fails pgsql ver 8.3

From: Tom Lane

Pat Heuvel <pheuvel@tpg.com.au> writes:
> [ vacuumlo fails ]
> When I added the -v option, there were many "removing lo xxxxx" messages
> before the above messages appeared. I have previously tried to reindex
> pg_largeobject, but that process failed as well.

You need to get the index consistent before trying vacuumlo; that
program is not designed to deal with inconsistent catalogs.
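
For reference, the reindex in question would be along these lines (a
sketch; pg_largeobject is a system catalog, so this has to be run as a
superuser):

    REINDEX TABLE pg_largeobject;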

What exactly happens when you try to reindex pg_largeobject?

            regards, tom lane

Re: vacuumlo fails pgsql ver 8.3

From: Pat Heuvel

On 6/02/2012 4:39 AM, Tom Lane wrote:
> Pat Heuvel <pheuvel@tpg.com.au> writes:
>> [ vacuumlo fails ]
>> When I added the -v option, there were many "removing lo xxxxx" messages
>> before the above messages appeared. I have previously tried to reindex
>> pg_largeobject, but that process failed as well.
> You need to get the index consistent before trying vacuumlo; that
> program is not designed to deal with inconsistent catalogs.
>
> What exactly happens when you try to reindex pg_largeobject?
> 		regards, tom lane

ERROR:  could not create unique index "pg_largeobject_loid_pn_index"
SQL state: 23505
DETAIL:  Table contains duplicated values.

Re: vacuumlo fails pgsql ver 8.3

From: Tom Lane

Pat Heuvel <pheuvel@tpg.com.au> writes:
> On 6/02/2012 4:39 AM, Tom Lane wrote:
>> What exactly happens when you try to reindex pg_largeobject?

> ERROR:  could not create unique index "pg_largeobject_loid_pn_index"
> DETAIL:  Table contains duplicated values.

Could be worse.  What you'll need to do is look through the
pg_largeobject catalog for duplicated (loid, pageno) values, and
manually DELETE the redundant rows, or else reassign them new OIDs
if you want to keep the data.

A tip for issuing the removal commands is to use the CTID column to
distinguish otherwise-identical rows, ie you could do something
like
    select ctid, loid, pageno from pg_largeobject
    where (loid, pageno) in (select loid, pageno from pg_largeobject
                group by 1, 2 having count(*) > 1);
    ... examine results ...
    delete from pg_largeobject where ctid = '...';

I believe you'll need to do any direct DELETE or UPDATE on the
catalog as superuser.
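
If you'd rather keep the data, the reassignment route could look like
this (a sketch only; 424242 stands in for a new, unused OID, and the
ctid is a placeholder taken from the select above):

    -- move the duplicate row under a new large object ID
    -- instead of deleting it
    update pg_largeobject set loid = 424242 where ctid = '(12345,6)';

Once no duplicates remain, the reindex should succeed, and vacuumlo
can be retried.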

            regards, tom lane

Re: vacuumlo fails pgsql ver 8.3

From: Pat Heuvel

On 7/02/2012 3:48 AM, Tom Lane wrote:
> Pat Heuvel <pheuvel@tpg.com.au> writes:
>> On 6/02/2012 4:39 AM, Tom Lane wrote:
>>> What exactly happens when you try to reindex pg_largeobject?
>> ERROR:  could not create unique index "pg_largeobject_loid_pn_index"
>> DETAIL:  Table contains duplicated values.
> Could be worse.  What you'll need to do is look through the
> pg_largeobject catalog for duplicated (loid, pageno) values, and
> manually DELETE the redundant rows, or else reassign them new OIDs
> if you want to keep the data.
>
> A tip for issuing the removal commands is to use the CTID column to
> distinguish otherwise-identical rows, ie you could do something
> like
>     select ctid, loid, pageno from pg_largeobject
>     where (loid, pageno) in (select loid, pageno from pg_largeobject
>                 group by 1, 2 having count(*) > 1);
>     ... examine results ...
>     delete from pg_largeobject where ctid = '...';
>
> I believe you'll need to do any direct DELETE or UPDATE on the
> catalog as superuser.
> 		regards, tom lane


Thanks Tom, I'll give that a try.