Alvaro Herrera wrote:
> If others can try this patch to ensure it enables pg_dump to work on
> their databases, it would be great.
It doesn't seem to help if a single field exceeds 1 GB, for instance when
inflated by a bin->hex translation.
postgres=# create table big as select pg_read_binary_file('data') as binarycol;
postgres=# select octet_length(binarycol) from big;
 octet_length
--------------
   1073700000
postgres=# copy big to '/var/tmp/big.copy';
ERROR: XX000: invalid memory alloc request size 2147400003
LOCATION: palloc, mcxt.c:903
Same problem with pg_dump.
OTOH, it improves the case where the cumulative size of the field contents
for a row exceeds 1 GB, but no single field exceeds that size.
If splitting the table into 3 fields, each smaller than 512MB:
postgres=# create table big2 as select
  substring(binarycol from 1 for 300*1024*1024) as b1,
  substring(binarycol from 1+300*1024*1024 for 300*1024*1024) as b2,
  substring(binarycol from 1+600*1024*1024 for 400*1024*1024) as b3
  from big;
postgres=# copy big2 to '/var/tmp/big.copy';
COPY 1
then that works, producing a single line of 2097152012 chars
in the output file.
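(Presumably that count breaks down as 2 hex characters per byte over the
1048576000 bytes of the three fields, plus 3 characters for each
backslash-escaped \x prefix, 2 tab delimiters, and the trailing newline:
2*1048576000 + 3*3 + 2 + 1 = 2097152012.)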
By contrast, it fails with an unpatched 9.5:
postgres=# copy big2 to '/var/tmp/big.copy';
ERROR: 54000: out of memory
DETAIL: Cannot enlarge string buffer containing 629145605 bytes by 629145602
more bytes.
LOCATION: enlargeStringInfo, stringinfo.c:260
With bytea_output set to 'escape', it also fails with the patch applied,
as it tries to allocate 4x the binary field size, which exceeds 1 GB again.
postgres=# set bytea_output =escape;
SET
postgres=# copy big2 to '/var/tmp/big.copy';
ERROR: invalid memory alloc request size 1258291201
LOCATION: palloc, mcxt.c:821
1258291201 = 300*1024*1024*4+1
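As a minimal illustration of that 4x worst case (a hypothetical check, not
part of the test above):

-- a single zero byte expands to the 4-character sequence \000 in escape format
select octet_length(encode('\x00'::bytea, 'escape'));  -- returns 4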
Also, the COPY of both tables works fine when using (FORMAT BINARY),
on both the patched and unpatched server.
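For reference, a sketch of the command form (the target paths here are
illustrative, not the exact ones used):

postgres=# copy big to '/var/tmp/big.bin' (format binary);
COPY 1
postgres=# copy big2 to '/var/tmp/big2.bin' (format binary);
COPY 1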
Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite