Re: Large objects and out-of-memory - Mailing list pgsql-bugs

From Tom Lane
Subject Re: Large objects and out-of-memory
Date
Msg-id 543675.1608575245@sss.pgh.pa.us
In response to Large objects and out-of-memory  (Konstantin Knizhnik <k.knizhnik@postgrespro.ru>)
Responses Re: Large objects and out-of-memory
List pgsql-bugs
Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:
> The following sequence of commands causes the backend's memory to exceed 10GB:

> INSERT INTO image1 SELECT lo_creat(-1) FROM generate_series(1,10000000);
> REASSIGN OWNED BY alice TO testlo;

[ shrug... ]  You're asking to change the ownership of 10000000 objects.
This is not going to be a cheap operation.  AFAIK it's not going to be
any more expensive than changing the ownership of 10000000 tables, or
any other kind of object.

The argument for allowing large objects to have per-object ownership and
permissions in the first place was that useful scenarios wouldn't have a
huge number of them (else you'd run out of disk space, if they're actually
"large"), so we needn't worry too much about the overhead.

We could possibly bound the amount of space used in the inval queue by
switching to an "invalidate all" approach once we got to an unreasonable
amount of space.  But this will do nothing for the other costs involved,
and I'm not really sure it's worth adding complexity for.
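
For illustration only, here is a toy sketch (not PostgreSQL code; all names
and the cap value are made up) of the "invalidate all" bounding idea: track
per-object invalidation entries until the queue passes a cap, then collapse
it into a single invalidate-everything marker so memory stays bounded no
matter how many objects the command touches.

```python
# Toy model of a bounded invalidation queue. Hypothetical names; this
# only illustrates the cap-and-collapse idea discussed above, not the
# actual backend data structures.

class InvalQueue:
    def __init__(self, cap=8):
        self.cap = cap
        self.entries = set()        # per-object invalidation messages
        self.invalidate_all = False

    def add(self, oid):
        if self.invalidate_all:
            return                  # already collapsed; nothing to track
        self.entries.add(oid)
        if len(self.entries) > self.cap:
            # Bound memory: drop the individual entries and remember
            # only that the whole cache must be invalidated.
            self.entries.clear()
            self.invalidate_all = True


q = InvalQueue(cap=8)
for oid in range(10_000_000):       # e.g. one entry per reassigned object
    q.add(oid)
# Memory use is bounded by the cap, at the price of a full cache flush.
```

Note the trade-off the sketch makes visible: once collapsed, every backend
cache entry must be rebuilt, which is the "other costs" part that capping
the queue does nothing about.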

            regards, tom lane


