Re: Can't import large objects in most recent cvs (20020531 -- approx 1pm PDT) - Mailing list pgsql-hackers

From: Ron Snyder
Subject: Re: Can't import large objects in most recent cvs (20020531 -- approx 1pm PDT)
Msg-id: F888C30C3021D411B9DA00B0D0209BE803BB9A36@cvo-exchange.cvo.roguewave.com
Responses: Default privileges for new databases (was Re: Can't import large objects in most recent cvs)  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers

> -----Original Message-----
> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]
> Sent: Friday, May 31, 2002 3:24 PM
> To: Ron Snyder
> Cc: pgsql-hackers
> Subject: Re: [HACKERS] Can't import large objects in most recent cvs (20020531 -- approx 1pm PDT)
>
> Ron Snyder <snyder@roguewave.com> writes:
> > When I attempt to restore from a 7.2.1-created dump into my newly
> > created 7.3devel database, I get this:
>
> > pg_restore: [archiver (db)] could not create large object
> > cross-reference table:
>
> > I didn't find any mention of this in the hackers mail archive, so I
> > thought I'd pass it on.
>
> News to me; I just tested that code a couple of days ago after hacking
> on it for schema support.  Would you look in the postmaster log to see
> exactly what error message the backend is issuing?  It might also help
> to run pg_restore with "PGOPTIONS=--debug_print_query=1" so you can
> verify the exact query that's failing.
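
For reference, I invoked the restore roughly along these lines (the dump file name below is a stand-in for the real one):

    PGOPTIONS="--debug_print_query=1" pg_restore -d quickview quickview.dump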

From the client:
COPY "unique_names" WITH OIDS FROM stdin;
LOG:  query: select getdatabaseencoding()
pg_restore: LOG:  query: Create Temporary Table pg_dump_blob_xref(oldOid pg_catalog.oid, newOid pg_catalog.oid);
pg_restore: [archiver (db)] could not create large object cross-reference table:

From the server:
May 31 15:58:15 vault pgcvs[366]: [5-5] -- Name: unique_names Type: TABLE DATA Schema: - Owner: qvowner
May 31 15:58:15 vault pgcvs[366]: [5-6] -- Data Pos: 30713831 (Length 1214)
May 31 15:58:15 vault pgcvs[366]: [5-7] --
May 31 15:58:15 vault pgcvs[366]: [5-8] COPY "unique_names" WITH OIDS FROM stdin;
May 31 15:58:15 vault pgcvs[367]: [1] LOG:  connection received: host=[local]
May 31 15:58:15 vault pgcvs[367]: [2] LOG:  connection authorized: user=qvowner database=quickview
May 31 15:58:15 vault pgcvs[367]: [3] LOG:  query: select getdatabaseencoding()
May 31 15:58:15 vault pgcvs[367]: [4] LOG:  query: Create Temporary Table pg_dump_blob_xref(oldOid pg_catalog.oid, newOid pg_catalog.oid);

(and then a later run with a higher debug level)
May 31 16:11:50 vault pgcvs[2135]: [77] LOG:  connection received: host=[local]
May 31 16:11:50 vault pgcvs[2135]: [78] LOG:  connection authorized: user=qvowner database=quickview
May 31 16:11:50 vault pgcvs[2135]: [79] DEBUG:  /usr/local/pgsql-20020531/bin/postmaster child[2135]: starting with (
May 31 16:11:50 vault pgcvs[2135]: [80] DEBUG:  ^Ipostgres
May 31 16:11:50 vault pgcvs[2135]: [81] DEBUG:  ^I-v131072
May 31 16:11:50 vault pgcvs[2135]: [82] DEBUG:  ^I-p
May 31 16:11:50 vault pgcvs[2135]: [83] DEBUG:  ^Iquickview
May 31 16:11:50 vault pgcvs[2135]: [84] DEBUG:  )
May 31 16:11:50 vault pgcvs[2135]: [85] DEBUG:  InitPostgres
May 31 16:11:50 vault pgcvs[2135]: [86] DEBUG:  StartTransactionCommand
May 31 16:11:50 vault pgcvs[2135]: [87] LOG:  query: select getdatabaseencoding()
May 31 16:11:50 vault pgcvs[2135]: [88] DEBUG:  ProcessQuery
May 31 16:11:50 vault pgcvs[2135]: [89] DEBUG:  CommitTransactionCommand
May 31 16:11:50 vault pgcvs[2135]: [90] DEBUG:  StartTransactionCommand
May 31 16:11:50 vault pgcvs[2135]: [91] LOG:  query: Create Temporary Table pg_dump_blob_xref(oldOid pg_catalog.oid, newOid pg_catalog.oid);
May 31 16:11:50 vault pgcvs[2135]: [92] DEBUG:  ProcessUtility
May 31 16:11:50 vault pgcvs[2135]: [93] ERROR:  quickview: not authorized to create temp tables

Digging a bit, I've discovered this:
1) usesysid 1 owns the database in the old server, but all the tables are owned by 'qvowner' (and others).
2) qvowner does not have superuser (dba) privileges.

My theory is that I'm getting this last message ("not authorized to create temp tables") because the default permissions have been tightened down in 7.3devel.
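
If that's right, I'm guessing something along these lines (run as superuser; I'm assuming the new database-level GRANT in 7.3devel takes a TEMPORARY privilege) would loosen it back up:

    GRANT TEMPORARY ON DATABASE quickview TO qvowner;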

I believe that I can safely change the ownership of the database in the old server to qvowner, and then run the pg_dump and pg_restore again, right? Or should I run pg_restore as the superuser and just change the ownership afterwards?
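
For the first option, since 7.2 has no ALTER DATABASE ... OWNER command, I assume that means a direct catalog update as superuser on the old server, something like:

    UPDATE pg_database
       SET datdba = (SELECT usesysid FROM pg_user WHERE usename = 'qvowner')
     WHERE datname = 'quickview';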

-ron


> (I've thought several times that we should clean up pg_dump and
> pg_restore so that they report the failed query and backend message in
> *all* cases; right now they're pretty haphazard about it.)
> 
>             regards, tom lane
> 

