Thread: permission denied for large object 200936761
Hi!
Database does not contain large objects.
pg_dump starts to throw error
ERROR: permission denied for large object 200936761

Tried

select * from "200936761"

but it returned "relation does not exist"
How can I fix this? How can I find which table causes this error?
How can I find and delete all large objects in the database? Maybe they were created accidentally.
Using
PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
Andrus.
On 2/1/21 6:43 AM, Andrus wrote:
> Database does not contain large objects.
>
> pg_dump starts to throw error
>
> ERROR: permission denied for large object 200936761

Did you do the pg_dump as a superuser? More below.

> Tried
>
> select * from "200936761"
>
> but it returned "relation does not exist"
>
> How to fix this ? How to find which table causes this error ?

Large objects are stored in a system catalog (table):
https://www.postgresql.org/docs/12/catalog-pg-largeobject.html
You can query that to see what is there. I would not go about deleting until you find what the large objects are for.

> How to find and delete all large objects in database ? Maybe it is created accidently .
>
> Using
>
> PostgreSQL 12.2 (Debian 12.2-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
>
> Andrus.

--
Adrian Klaver
adrian.klaver@aklaver.com
>> Database does not contain large objects.
>>
>> pg_dump starts to throw error
>>
>> ERROR: permission denied for large object 200936761
>
> Did you do the pg_dump as a superuser?
No.
pg_dump needs to be invokable by a non-superuser as well. It backs up two schemas, public and firma74. The

-n public -n firma74

command line options are used.
> You can query that to see what is there. I would not go about deleting until you find what the large objects are for.
select * from pg_largeobject
returns an empty table.
Database has approx 50 schemas and many thousands of tables.
Andrus.
On 2/1/21 8:32 AM, Andrus wrote:
>>> Database does not contain large objects.
>>>
>>> pg_dump starts to throw error
>>>
>>> ERROR: permission denied for large object 200936761
>>
>> Did you do the pg_dump as a superuser?
>
> No.
>
> pg_dump needs to be invoked by non-superuser also. It backs up two schemas, public and firma74 .

Well the user that runs the pg_dump needs to have permissions on the large objects. For more information see below.

> -n public -n firma74
>
> command line options are used.
>
>> You can query that to see what is there. I would not go about deleting until you find what the large objects are for.
>
> select * from pg_largeobject
>
> returns empty table.

I haven't used large objects in a while. Forgot that they now have permissions associated with them. Try:
https://www.postgresql.org/docs/12/catalog-pg-largeobject-metadata.html
instead.

> Database has approx 50 schemas and many thousands of tables.
>
> Andrus.

--
Adrian Klaver
adrian.klaver@aklaver.com
Hi!
>Well the user that runs the pg_dump needs to have permissions on the large objects. For more information see below.
How can I add permissions for this to non-superusers?
The GRANT command:

GRANT { { SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] }
    ON LARGE OBJECT loid [, ...]
    TO role_specification [, ...] [ WITH GRANT OPTION ]
It requires a large object id. How can I grant backup privileges if there are no large objects in the database?
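Since GRANT ... ON LARGE OBJECT takes individual oids, one workaround is to loop over pg_largeobject_metadata and grant on each object that currently exists. This is only a sketch, assuming the backup role is named dumpuser (the role name used elsewhere in this thread); run it as superuser or as the objects' owner:

```sql
-- Sketch: grant SELECT on every existing large object to the backup role.
-- "dumpuser" is an assumed role name; substitute your own.
DO $$
DECLARE
    lo_oid oid;
BEGIN
    FOR lo_oid IN SELECT oid FROM pg_largeobject_metadata LOOP
        EXECUTE format('GRANT SELECT ON LARGE OBJECT %s TO dumpuser', lo_oid);
    END LOOP;
END;
$$;
```

Note that this covers only objects existing when it runs; the grant would have to be repeated for large objects created later.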
Should lo_compat_privileges be set in postgresql.conf, or is there a better way?
> -n public -n firma74 command line options are used.
>
>> You can query that to see what is there. I would not go about deleting until you find what the large objects are for.
>
> select * from pg_largeobject
>
> returns empty table.
> I haven't used large objects in a while. Forgot that they now have permissions associated with them. Try:
> https://www.postgresql.org/docs/12/catalog-pg-largeobject-metadata.html
> instead.
select * from pg_largeobject_metadata
returns 3 rows:

 oid       | lomowner
-----------+----------
 200936761 |    30152
 200936762 |    30152
 200936767 |    30152
How can I find the table and schema that reference these?
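A large object is not attached to any table, but you can at least see which role owns each one by joining pg_largeobject_metadata to pg_roles (a sketch; column names are as documented for PostgreSQL 12):

```sql
-- Sketch: list each large object together with its owner's role name.
SELECT m.oid     AS large_object_oid,
       r.rolname AS owner
FROM pg_largeobject_metadata AS m
JOIN pg_roles AS r ON r.oid = m.lomowner;
```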
Andrus.
On 2/1/21 9:13 AM, Andrus wrote:
> How to add permissions to non-superusers for this.?
>
> GRANT { { SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] }
>     ON LARGE OBJECT loid [, ...]
>     TO role_specification [, ...] [ WITH GRANT OPTION ]
>
> Requires large object id. How to grant backup privilege if there are no large objects in database ?

Your query below says there are, and it also returns the oids for them.

> Should lo_compat_privileges be set in postgresql.conf or is there better way ?

Changing lo_compat_privileges is an option, but it is just moving the problem down the road. The actual fix is to find out what the large objects are there for, who owns them, and whether they are still necessary. Then, based on the answers, make the changes needed to future-proof further interaction with them.

> select * from pg_largeobject_metadata
>
> returns 3 rows:
>
> Oid Lomowner
> 200936761 30152
> 200936762 30152
> 200936767 30152
>
> How to find table and schema which is referenced by this ?

Large objects exist independent of those. The important part of the above is lomowner. Use that oid to find the role that owns the objects here:
https://www.postgresql.org/docs/12/view-pg-roles.html

select rolname from pg_roles where oid = <lomowner>;

--
Adrian Klaver
adrian.klaver@aklaver.com
Hi!
>Large objects exist independent of those. The important part of the above is lomowner. Use that oid to find the role that owns the objects here:
select rolname from pg_roles where oid = 30152;

returns my role, clusteradmin. I have superuser rights:
CREATE ROLE clusteradmin WITH
LOGIN
SUPERUSER
INHERIT
NOCREATEDB
NOCREATEROLE
NOREPLICATION;
GRANT db1_owner, db2_owner, ... to clusteradmin;
It havent created any large objects.
How can I use this information to fix the issue?
Andrus.
On 2/1/21 9:55 AM, Andrus wrote:
> select rolname from pg_roles where oid = 30152 ;
>
> returns my role , clusteradmin . I have superuser rights:
>
> CREATE ROLE clusteradmin WITH
>   LOGIN
>   SUPERUSER
>   INHERIT
>   NOCREATEDB
>   NOCREATEROLE
>   NOREPLICATION;
>
> GRANT db1_owner, db2_owner, ... to clusteradmin;

Not sure what the above is supposed to be doing?

> It havent created any large objects.

What hasn't created large objects?

> How to use this information to fix the issue ?

Do the pg_dump as user clusteradmin.

--
Adrian Klaver
adrian.klaver@aklaver.com
>> Large objects exist independent of those. The important part of the above is lomowner. Use that oid to find the role that owns the objects here:
>>
>> select rolname from pg_roles where oid = 30152;
>
> Not sure what the above is supposed to be doing?

I showed the user definition.

>> It havent created any large objects.
>
> What hasn't created large objects?

I haven't.

>> How to use this information to fix the issue ?
>
> Do the pg_dump as user clusteradmin.

This works. However, I need to allow a non-superuser to create backups also. How can I do this?
Andrus.
On 2/1/21 12:07 PM, Andrus wrote:
>> Do the pg_dump as user clusteradmin.
>
> This works. However I need to allow non-superuser to create backup also. How to do this ?

Short term, grant the dump user permissions on the large objects. Long term, figure out what they are and whether they are needed or not.

--
Adrian Klaver
adrian.klaver@aklaver.com
Hi!
>Long term figure out what they are and if they are needed or not.
Non-superuser backup worked earlier. It looks like large objects suddenly appeared in the database:
select * from pg_largeobject_metadata
 oid       | lomowner
-----------+----------
 200936761 |    30152
 200936762 |    30152
 200936767 |    30152
How can I figure out what the large objects with oids 200936761, 200936762 and 200936767 are?

pg_dump throws the error on the first of them: 200936761.
Andrus.
On 2/1/21 1:28 PM, Andrus wrote:
> Non-superuser backup worked earlier. It looks like large objects suddenly appeared in database:
>
> select * from pg_largeobject_metadata
>
> Oid Lomowner
> 200936761 30152
> 200936762 30152
> 200936767 30152
>
> How to figure out what are large object with oids 200936761, 200936762 and 200936767 ?

I misspoke earlier about large objects not being tied to a schema.table. They can be, as a column of type oid. To see if they are, try:

SELECT relname, attname
FROM pg_attribute AS pa
JOIN pg_class AS pc ON pa.attrelid = pc.oid
WHERE atttypid = 'oid'::regtype
  AND relnamespace = 'public'::regnamespace
  AND attnum > 0;

Where relnamespace is the schema you are interested in.

> Pd_dump throws error on first of them: 200936761
>
> Andrus.

--
Adrian Klaver
adrian.klaver@aklaver.com
> To see if they are, try:
>
> SELECT relname, attname
> FROM pg_attribute AS pa
> JOIN pg_class AS pc ON pa.attrelid = pc.oid
> WHERE atttypid = 'oid'::regtype
>   AND relnamespace = 'public'::regnamespace
>   AND attnum > 0;
>
> Where relnamespace is the schema you are interested in.
The pg_dump that throws the error is called with the -n public -n firma74 parameters.
I tried
SELECT relname, attname
FROM pg_attribute AS pa
JOIN pg_class AS pc ON pa.attrelid = pc.oid
WHERE atttypid = 'oid'::regtype
  AND relnamespace IN ('public'::regnamespace, 'firma74'::regnamespace)
  AND attnum > 0;
It returns 0 rows.

Andrus.
Hi!
>Obviously large objects *are* used.
> You have to grant the database user permissions with
>
> GRANT SELECT ON LARGE OBJECT 200936761 TO dumpuser;
>
> Alternatively, use the -B option of pg_dump to skip dumping large objects.
I added the -B option and set lo_compat_privileges = on in postgresql.conf as a temporary fix.
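For reference, a pg_dump invocation with the -B (--no-blobs) option might look like this. This is only a sketch: the database name sba and the role dumpuser are taken from elsewhere in this thread, and the output file name is a placeholder.

```
pg_dump -U dumpuser -n public -n firma74 -B -Fc -f sba.dump sba
```

-Fc produces the custom-format archive mentioned later in the thread.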
dumpuser has created backups of the same two schemas for years without issues. The issue started occurring today.
The application does not create large objects. It uses bytea columns instead.
How can I figure out what this large object is?
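One way to probe what the objects contain is to check their size and leading bytes from SQL. This is a sketch using the documented server-side lo_get() function; the first few bytes are often enough to recognize a file format:

```sql
-- Sketch: show each large object's size and its first 16 bytes.
SELECT m.oid,
       length(lo_get(m.oid))        AS size_bytes,
       substr(lo_get(m.oid), 1, 16) AS leading_bytes
FROM pg_largeobject_metadata AS m;
```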
Andrus.
On 2/1/21 2:20 PM, Andrus wrote:
> I added -B option and changed postgresql.conf to lo_compat_privileges=on as temporary fix.
>
> dumpuser has created backups of same two schemas for years without issues. The issue starts to occur today.

What code changed between the last backup and today?

> Application does not create large objects. It uses bytea columns instead.
>
> How to figure out what is this large object ?

You could try some of the functions here:
https://www.postgresql.org/docs/12/lo-funcs.html
to see if you can figure it out.

--
Adrian Klaver
adrian.klaver@aklaver.com
Hi!
> What code changed between the last backup and today?
I have imported data from other clusters and executed a lot of different SQL commands. I have used GRANT, REVOKE, and REASSIGN commands to change privileges for other users, and I have deleted and added users.
The cluster contains 25 databases. There are 50 users executing SELECT, INSERT, UPDATE, DELETE, CREATE, DROP and similar commands.

dumpuser has rights only to one database and two schemas (public and firma74).
> You could try some of the functions here:
> https://www.postgresql.org/docs/12/lo-funcs.html
> to see if you can figure it out.
There is only one function on this page that returns data, lo_get(). I tried
select * from lo_get(200936761);
select * from lo_get(200936762);
select * from lo_get(200936767);
Those queries each returned one row containing a single zero-length lo_get column.
Andrus.
On 2/1/21 3:07 PM, Andrus wrote:
> I have imported data from other clusters and executed lot of different sql commands. I have used grant, revoke, reassign commands to change privileges for other users and have deleted and added users.

I don't suppose this was done in a structured way that could be gone back over?

> Cluster contains 25 databases. There are 50 users executing SELECT, INSERT, UPDATE, DELETE, CREATE, DROP and similar commands.

Yeah, but you are only concerned with one database and two schemas.

>> You could try some of the functions here:
>>
>> https://www.postgresql.org/docs/12/lo-funcs.html
>>
>> to see if you can figure it out.
>
> There is only one function , lo_get() in this page which returns data. I tried

Actually there is a second, lo_export(), at the bottom of the page. It needs superuser privilege and access to the server file system.

> select * from lo_get(200936761);
> select * from lo_get(200936762);
> select * from lo_get(200936767);
>
> Those queries returned one row containing one zero-length column lo_get.

What happens if you query:
https://www.postgresql.org/docs/12/catalog-pg-largeobject.html
as a superuser? Do you see anything in the data field?

--
Adrian Klaver
adrian.klaver@aklaver.com
Hi!
> You can extract it with
>
> \lo_export 200936761 'somefile'
>
> in psql and examine the file. Ask the people who use that database!
I tried:

root@c202-76:~# ./pgsqlkaiv.sh
psql (12.2 (Debian 12.2-2.pgdg100+1))
Type "help" for help.
sba=# \lo_export 200936761 'large200936761'
lo_export
sba=# \q
root@c202-76:~# ls -l large*
-rw-r--r-- 1 root root 0 veebr 2 10:45 large200936761
The resulting file size is 0.
Andrus.
>I have imported data from other clusters and executed lot of different sql commands. I have used grant, revoke, reassign commands to change privileges for other users and have deleted and added users.
> I don't suppose this was done in a structured way that could be gone back over?
The exact command sequence cannot be restored.

I have a script that normalizes rights for a user. It removes all rights first and sets the desired rights afterwards.

This was executed a lot of times; it has been used for years. There were also manual user-rights adjustments using SQL commands in cases where this universal script could not be used. There are approx. 300 Postgres roles in the cluster, and users change every week.

The previous dumpuser backup that succeeded was on January 4th, 9 MB in custom format. There are nightly backups of the databases in the cluster.

There is also a hot standby, base backups every Sunday using pg_basebackup, and WAL archiving. WAL archiving and the hot standby were broken during the previous week (I increased max_connections=400 on the main server but forgot to increase it on the standby server; WAL archiving also runs from the hot standby server).
>>> You could try some of the functions here:
>>> https://www.postgresql.org/docs/12/lo-funcs.html
>>> to see if you can figure it out.
>>
>> There is only one function , lo_get() in this page which returns data. I tried
>
> Actually there is a second, lo_export(), at the bottom of the page. It needs superuser privilege and access to the server file system.
I tried on the server using psql:
select lo_export(200936761,'large1');
select lo_export(200936762,'large2');
select lo_export(200936767,'large3');
The result files have zero size.

> What happens if you query:
> https://www.postgresql.org/docs/12/catalog-pg-largeobject.html
> as a superuser? Do you see anything in the data field?
select * from pg_largeobject
running as superuser returns an empty table with 3 columns:
loid, pageno and data
Andrus.
> I don't suppose this was done in a structured way that could be gone back over?
Accidentally, a '200936767'::lo cast was issued:
INSERT INTO report ( ... ) values (.. , '200936767'::lo, ... )
The server throws the error type "lo" does not exist for this.

Maybe this causes orphan large object creation by the server or by the ODBC driver. How can I fix this?

The report table should not have lo-type columns. I have no idea why this cast is generated when using psqlodbc.
Andrus.
Hi!
> So? What is your point? Somebody created a large object of size 0.
The report table has a bytea column. It looks like the psqlodbc driver adds a ::lo cast when inserting binary data:
https://github.com/hlinnaka/psqlodbc/blob/master/convert.c#L4564
and this adds a row to the pg_largeobject_metadata table.

Why does it add a cast to the lo type? This type does not exist in the Postgres server and causes a server error.
Andrus.
On 2/2/21 4:12 AM, Andrus wrote:
> report table has bytea column. It looks like psqlodbc driver adds ::lo cast when inserting binary data:
>
> https://github.com/hlinnaka/psqlodbc/blob/master/convert.c#L4564
>
> and this adds row to pg_largeobject_metadata table.
>
> Why it adds cast to lo type ? This type does not exist in Postgres server and causes server error.

The comment for the code snippet you linked to is:

/*
 * the oid of the large object -- just put that in for the
 * parameter marker -- the data has already been sent to
 * the large object
 */

So at that point the deed has been done. The questions to ask:

1) Why does the driver think it is being passed a large object in the first place?

2) Have there been any recent changes to code that passes through the ODBC driver that would account for 1)?

3) To help with 2), where is 'INSERT INTO report ( ... ) values (.. , '200936767'::lo, ... )' coming from?

My suspicion is that it is a user-initiated change. If it is not and you suspect the ODBC driver, then I would suggest bringing it up on the -odbc list:

https://www.postgresql.org/list/pgsql-odbc/

--
Adrian Klaver
adrian.klaver@aklaver.com
> So at that point the deed has been done.
>
> The questions to ask:
>
> 1) Why the driver thinks it is being passed a large object in the first place?

The source data type was binary. It was mapped to oid for an unknown reason.

> 2) Have there been any recent changes to code that passes through the ODBC driver that would account for 1)?
>
> 3) To help with 2), where is 'INSERT INTO report ( ... ) values (.. , '200936767'::lo, ... )' coming from?
>
> My suspicion is that it is user initiated change.
This change was done by me. I added new reports from another database. The reports contain binary columns, and the import throws an error about the unknown lo type.

I removed the binary columns from the import, and after that the import succeeded.

It looks like during this process 3 rows were added to the large object metadata table.
I used
select lo_unlink(oidnumber)
to remove them.
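Once it is confirmed the objects are orphans, they can all be unlinked in one pass. This is a sketch: it deletes every large object in the current database, so check pg_largeobject_metadata first. The vacuumlo contrib utility does something similar but removes only objects no longer referenced by any oid/lo column.

```sql
-- Sketch: unlink ALL large objects in the current database.
-- Only safe after verifying none are still referenced.
SELECT lo_unlink(oid) FROM pg_largeobject_metadata;
```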
>If it is not and you suspect the ODBC driver then I would suggest bringing it up on the -odbc list:
>https://www.postgresql.org/list/pgsql-odbc/
I created the binary data on the client side by creating a FoxPro cursor
create cursor t ( t gen )
and used psqlodbc to insert this data:
create temp table test ( test bytea ) on commit drop;
insert into test values ( ?t.t );
This code throws the exception

type "lo" does not exist

but each call adds a new row to the pg_largeobject_metadata table.

The ODBC driver creates a large object and adds the lo cast. The large object remains even though the transaction is rolled back due to the nonexistent lo type.

C7=0 (bytea as longvarbinary is false) is used in the connection string.
Andrus.
On 2/2/21 9:05 AM, Andrus wrote:
> and used psqlodbc to insert this data:
>
> create temp table test ( test bytea ) on commit drop;
> insert into test values ( ?t.t );
>
> This code throws exception
>
> type "lo" does not exist
>
> but each call adds new row to pg_largeobject_metadata table.
>
> Odbc driver creates large object and adds lo cast. This large object remains even if transaction is rolled back due to unexisting lo type.
>
> C7=0 (bytea as logvarbinary is false) is used in connection string.

I think that is supposed to be true for what you want to do. It has been a while since I worked with ODBC, so I would confirm on the -odbc list.

--
Adrian Klaver
adrian.klaver@aklaver.com