Extend ALTER DEFAULT PRIVILEGES for large objects

From
Yugo NAGATA
Date:
Hi,

Currently, ALTER DEFAULT PRIVILEGES doesn't support large objects,
so if we want to allow users other than the owner to use a large
object, we need to grant a privilege on it every time a large object
is created. One of our clients finds this annoying, so I would like
to propose extending ALTER DEFAULT PRIVILEGES to large objects.

Here are the new actions allowed in abbreviated_grant_or_revoke:

+GRANT { { SELECT | UPDATE }
+    [, ...] | ALL [ PRIVILEGES ] }
+    ON LARGE OBJECTS
+    TO { [ GROUP ] role_name | PUBLIC } [, ...] [ WITH GRANT OPTION ]

+REVOKE [ GRANT OPTION FOR ]
+    { { SELECT | UPDATE }
+    [, ...] | ALL [ PRIVILEGES ] }
+    ON LARGE OBJECTS
+    FROM { [ GROUP ] role_name | PUBLIC } [, ...]
+    [ CASCADE | RESTRICT ]

A new keyword OBJECTS is introduced to use the plural form in the
syntax, as for the other supported object types. A schema name cannot
be specified for large objects, since large objects don't belong to
any schema.
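With the proposed syntax, the grant would be set up once per granting
role instead of once per object (a sketch based on the grammar above;
the role name is an example):

```sql
-- Grant SELECT on all large objects subsequently created
-- by the current role.
ALTER DEFAULT PRIVILEGES GRANT SELECT ON LARGE OBJECTS TO app_user;

-- Large objects created from now on are readable by app_user
-- without any per-object GRANT.
SELECT lo_create(0);
```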

The attached patch was originally proposed by Haruka Takatsuka;
some fixes and tests were added by me.

Regards,
Yugo Nagata

-- 
Yugo NAGATA <nagata@sraoss.co.jp>


Re: Extend ALTER DEFAULT PRIVILEGES for large objects

From
Tom Lane
Date:
Yugo NAGATA <nagata@sraoss.co.jp> writes:
> Currently, ALTER DEFAULT PRIVILEGE doesn't support large objects,
> so if we want to allow users other than the owner to use the large
> object, we need to grant a privilege on it every time a large object
> is created. One of our clients feels that this is annoying, so I would
> like propose to extend  ALTER DEFAULT PRIVILEGE to large objects. 

I wonder how this plays with pg_dump, and in particular whether it
breaks the optimizations that a45c78e32 installed for large numbers
of large objects.  The added test cases seem to go out of their way
to leave no trace behind that the pg_dump/pg_upgrade tests might
encounter.

I think you broke psql's \ddp, too.  And some other places; grepping
for DEFACLOBJ_NAMESPACE finds other oversights.

On the whole I find this proposed feature pretty unexciting
and dubiously worthy of the implementation/maintenance effort.

            regards, tom lane



Re: Extend ALTER DEFAULT PRIVILEGES for large objects

From
Yugo NAGATA
Date:
On Tue, 23 Apr 2024 23:47:38 -0400
Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Yugo NAGATA <nagata@sraoss.co.jp> writes:
> > Currently, ALTER DEFAULT PRIVILEGE doesn't support large objects,
> > so if we want to allow users other than the owner to use the large
> > object, we need to grant a privilege on it every time a large object
> > is created. One of our clients feels that this is annoying, so I would
> > like propose to extend  ALTER DEFAULT PRIVILEGE to large objects. 
> 
> I wonder how this plays with pg_dump, and in particular whether it
> breaks the optimizations that a45c78e32 installed for large numbers
> of large objects.  The added test cases seem to go out of their way
> to leave no trace behind that the pg_dump/pg_upgrade tests might
> encounter.

Thank you for your comments.

The previous patch did not work with pg_dump because I forgot some
fixes. I have attached an updated patch that includes them.

I believe a45c78e32 is about already-existing large objects and is
not directly related to default privileges, so it will not be
affected by this patch.

> I think you broke psql's \ddp, too.  And some other places; grepping
> for DEFACLOBJ_NAMESPACE finds other oversights.

Yes, I did. The attached patch includes fixes for psql, too.
 
> On the whole I find this proposed feature pretty unexciting
> and dubiously worthy of the implementation/maintenance effort.

I believe this feature is beneficial to some users because it lets
them omit the GRANT that was otherwise necessary for every large
object creation. It seems to me that the implementation/maintenance
cost is not so high compared to other objects (e.g. default
privileges on schemas), unless I am still missing something.

Regards,
Yugo Nagata

-- 
Yugo NAGATA <nagata@sraoss.co.jp>


Re: Extend ALTER DEFAULT PRIVILEGES for large objects

From
Nathan Bossart
Date:
On Tue, Apr 23, 2024 at 11:47:38PM -0400, Tom Lane wrote:
> On the whole I find this proposed feature pretty unexciting
> and dubiously worthy of the implementation/maintenance effort.

I don't have any particularly strong feelings on $SUBJECT, but I'll admit
I'd be much more interested in resolving any remaining reasons folks are
using large objects over TOAST.  I see a couple of reasons listed in the
docs [0] that might be worth examining.

[0] https://www.postgresql.org/docs/devel/lo-intro.html

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



Re: Extend ALTER DEFAULT PRIVILEGES for large objects

From
Yugo NAGATA
Date:
On Wed, 24 Apr 2024 16:08:39 -0500
Nathan Bossart <nathandbossart@gmail.com> wrote:

> On Tue, Apr 23, 2024 at 11:47:38PM -0400, Tom Lane wrote:
> > On the whole I find this proposed feature pretty unexciting
> > and dubiously worthy of the implementation/maintenance effort.
> 
> I don't have any particularly strong feelings on $SUBJECT, but I'll admit
> I'd be much more interested in resolving any remaining reasons folks are
> using large objects over TOAST.  I see a couple of reasons listed in the
> docs [0] that might be worth examining.
> 
> [0] https://www.postgresql.org/docs/devel/lo-intro.html

If we could replace large objects with BYTEA in all use cases, large
objects would be completely obsolete. However, some users do use
large objects today, so improving this feature seems beneficial for
them.


Apart from that, extending TOAST to support more than 1GB of data
and stream-style access seems a good challenge. I don't know if there
has been a proposal for this in the past. Just a thought: for this
purpose, we would need a new type of varlena that can contain large
size information, and a new TOAST table schema that can store offset
information, or some way to convert an offset to chunk_seq.

Regards,
Yugo Nagata

> -- 
> Nathan Bossart
> Amazon Web Services: https://aws.amazon.com


-- 
Yugo NAGATA <nagata@sraoss.co.jp>



Re: Extend ALTER DEFAULT PRIVILEGES for large objects

From
Matthias van de Meent
Date:
On Fri, 26 Apr 2024 at 10:54, Yugo NAGATA <nagata@sraoss.co.jp> wrote:
>
> On Wed, 24 Apr 2024 16:08:39 -0500
> Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> > On Tue, Apr 23, 2024 at 11:47:38PM -0400, Tom Lane wrote:
> > > On the whole I find this proposed feature pretty unexciting
> > > and dubiously worthy of the implementation/maintenance effort.
> >
> > I don't have any particularly strong feelings on $SUBJECT, but I'll admit
> > I'd be much more interested in resolving any remaining reasons folks are
> > using large objects over TOAST.  I see a couple of reasons listed in the
> > docs [0] that might be worth examining.
> >
> > [0] https://www.postgresql.org/docs/devel/lo-intro.html
>
> If we could replace large objects with BYTEA in any use cases, large objects
> would be completely obsolete. However, currently some users use large objects
> in fact, so improvement in this feature seems beneficial for them.
>
>
> Apart from that, extending TOAST to support more than 1GB data and
> stream-style access seems a good challenge. I don't know if there was a
> proposal for this in past. This is  just a thought, for this purpose, we
> will need a new type of varlena that can contains large size information,
> and a new toast table schema that can store offset information or some way
> to convert a offset to chunk_seq.

If you're interested in this, you may want to check out [0] and [1] as
threads on the topic of improving TOAST handling of large values ([1]
being a thread where the limitations of our current external TOAST
pointer became clear once more), and maybe talk with Aleksander
Alekseev and Nikita Malakhov. They've been working closely with
systems that involve toast pointers and their limitations.

The most recent update on the work of Nikita (reworking TOAST
handling) [2] is that he got started adapting their externally
pluggable toast into type-internal methods only, though I've not yet
noticed any updated patches appear on the list.

As for other issues with creating larger TOAST values:
TOAST has a value limit of ~1GB, which means a single large value (or
two, for that matter) won't break anything in the wire protocol, as
DataRow messages have a message size field of uint32 [^3]. However, if
we're going to allow even larger values to be stored in table's
attributes, we'll have to figure out how we're going to transfer those
larger values to (and from) clients. For large objects, this is much
less of an issue because the IO operations are already chunked by
design, but this may not work well for types that you want to use in
your table's columns.
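The chunked access I mean is visible even in the server-side
functions, where reads can be issued per offset/length rather than
for the whole value at once (OID and chunk size here are
illustrative):

```sql
-- Read a multi-GB large object in 1MB slices, so no single
-- result ever approaches the uint32 message size limit.
SELECT lo_get(16404, 0,       1048576);  -- bytes [0, 1MB)
SELECT lo_get(16404, 1048576, 1048576);  -- bytes [1MB, 2MB)
-- ...and so on; each slice fits in one protocol message.
```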

Kind regards,

Matthias van de Meent

[0] https://postgr.es/m/flat/CAN-LCVMq2X%3Dfhx7KLxfeDyb3P%2BBXuCkHC0g%3D9GF%2BJD4izfVa0Q%40mail.gmail.com
[1] https://postgr.es/m/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com
[2] https://postgr.es/m/CAN-LCVOgMrda9hOdzGkCMdwY6dH0JQa13QvPsqUwY57TEn6jww%40mail.gmail.com

[^3] Most, if not all PostgreSQL wire protocol messages have this
uint32 message size field, but the DataRow one is relevant here as
it's the one way users get their data out of the database.



Re: Extend ALTER DEFAULT PRIVILEGES for large objects

From
Yugo NAGATA
Date:
On Fri, 26 Apr 2024 12:23:45 +0200
Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:

> On Fri, 26 Apr 2024 at 10:54, Yugo NAGATA <nagata@sraoss.co.jp> wrote:
> >
> > On Wed, 24 Apr 2024 16:08:39 -0500
> > Nathan Bossart <nathandbossart@gmail.com> wrote:
> >
> > > On Tue, Apr 23, 2024 at 11:47:38PM -0400, Tom Lane wrote:
> > > > On the whole I find this proposed feature pretty unexciting
> > > > and dubiously worthy of the implementation/maintenance effort.
> > >
> > > I don't have any particularly strong feelings on $SUBJECT, but I'll admit
> > > I'd be much more interested in resolving any remaining reasons folks are
> > > using large objects over TOAST.  I see a couple of reasons listed in the
> > > docs [0] that might be worth examining.
> > >
> > > [0] https://www.postgresql.org/docs/devel/lo-intro.html
> >
> > If we could replace large objects with BYTEA in any use cases, large objects
> > would be completely obsolete. However, currently some users use large objects
> > in fact, so improvement in this feature seems beneficial for them.
> >
> >
> > Apart from that, extending TOAST to support more than 1GB data and
> > stream-style access seems a good challenge. I don't know if there was a
> > proposal for this in past. This is  just a thought, for this purpose, we
> > will need a new type of varlena that can contains large size information,
> > and a new toast table schema that can store offset information or some way
> > to convert a offset to chunk_seq.
> 
> If you're interested in this, you may want to check out [0] and [1] as
> threads on the topic of improving TOAST handling of large values ([1]
> being a thread where the limitations of our current external TOAST
> pointer became clear once more), and maybe talk with Aleksander
> Alekseev and Nikita Malakhov. They've been working closely with
> systems that involve toast pointers and their limitations.
> 
> The most recent update on the work of Nikita (reworking TOAST
> handling) [2] is that he got started adapting their externally
> pluggable toast into type-internal methods only, though I've not yet
> noticed any updated patches appear on the list.

Thank you for your information. I'll check the threads you mentioned.

> As for other issues with creating larger TOAST values:
> TOAST has a value limit of ~1GB, which means a single large value (or
> two, for that matter) won't break anything in the wire protocol, as
> DataRow messages have a message size field of uint32 [^3]. However, if
> we're going to allow even larger values to be stored in table's
> attributes, we'll have to figure out how we're going to transfer those
> larger values to (and from) clients. For large objects, this is much
> less of an issue because the IO operations are already chunked by
> design, but this may not work well for types that you want to use in
> your table's columns.

I had overlooked this issue. I faced a similar issue when I tried to
pg_dump large text values, although in that case the error was raised
from enlargeStringInfo().

Regards,
Yugo Nagata


> Kind regards,
> 
> Matthias van de Meent
> 
> [0] https://postgr.es/m/flat/CAN-LCVMq2X%3Dfhx7KLxfeDyb3P%2BBXuCkHC0g%3D9GF%2BJD4izfVa0Q%40mail.gmail.com
> [1] https://postgr.es/m/flat/CAJ7c6TOtAB0z1UrksvGTStNE-herK-43bj22%3D5xVBg7S4vr5rQ%40mail.gmail.com
> [2] https://postgr.es/m/CAN-LCVOgMrda9hOdzGkCMdwY6dH0JQa13QvPsqUwY57TEn6jww%40mail.gmail.com
> 
> [^3] Most, if not all PostgreSQL wire protocol messages have this
> uint32 message size field, but the DataRow one is relevant here as
> it's the one way users get their data out of the database.


-- 
Yugo NAGATA <nagata@sraoss.co.jp>