Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
> On 03/21/2018 02:18 PM, Jaime Soler wrote:
>> We still get an out-of-memory error during pg_dump execution:
>> pg_dump: reading large objects
>> out of memory
> Hmmmm ... that likely happens because of this for loop copying a lot of
> data:
> https://github.com/postgres/postgres/blob/master/src/bin/pg_dump/pg_dump.c#L3258
The long and the short of it is that too many large objects *will*
choke pg_dump; this has been obvious since we decided to let it treat
large objects as heavyweight objects. See, e.g.,
https://www.postgresql.org/message-id/29613.1476969807@sss.pgh.pa.us
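To give a sense of scale, here's a simplified standalone model of the
pattern. This is NOT the actual pg_dump code: the struct fields here are
invented for illustration, and IIRC the real BlobInfo in pg_dump.c
carries a full DumpableObject header plus an archive TOC entry on top of
this, so the true per-object cost is higher. The point is just that
getBlobs() materializes the whole query result in libpq and then keeps a
per-object entry for every LO, so memory grows linearly with the number
of large objects:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* stand-in for pg_dump's per-LO metadata struct (fields invented) */
typedef struct
{
	unsigned int oid;		/* OID of the large object */
	char	   *name;		/* strdup'd OID string used as object name */
	char	   *rolname;	/* strdup'd owner name */
	char	   *acl;		/* strdup'd ACL string, often NULL */
} FakeBlobInfo;

int
main(void)
{
	size_t		ntups = 10 * 1000 * 1000;	/* pretend 10M large objects */
	FakeBlobInfo *binfo;
	size_t		i;

	/* one array entry per large object, as in the real loop */
	binfo = malloc(ntups * sizeof(FakeBlobInfo));
	if (binfo == NULL)
	{
		fprintf(stderr, "out of memory\n");
		return 1;
	}

	for (i = 0; i < ntups; i++)
	{
		char		buf[32];

		snprintf(buf, sizeof(buf), "%zu", 100000 + i);
		binfo[i].oid = (unsigned int) (100000 + i);
		/* a couple of small heap allocations per object */
		binfo[i].name = strdup(buf);
		binfo[i].rolname = strdup("postgres");
		binfo[i].acl = NULL;
		if (binfo[i].name == NULL || binfo[i].rolname == NULL)
		{
			fprintf(stderr, "out of memory\n");
			return 1;
		}
	}

	printf("array alone: %zu MB, plus two heap strings per LO\n",
		   ntups * sizeof(FakeBlobInfo) / (1024 * 1024));
	return 0;
}

Even this stripped-down model needs a few hundred MB for the array at
10M objects, and rather more once the per-object strings and malloc
overhead are counted; the real thing is worse.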
I don't think there's any simple fix available. We discussed some
possible solutions in
https://www.postgresql.org/message-id/flat/5539483B.3040401%40commandprompt.com
but none of them looked easy. The best short-term answer is "run
pg_dump in a less memory-constrained system".
regards, tom lane