Re: pg_dump / copy bugs with "big lines" ? - Mailing list pgsql-hackers

From Daniel Verite
Subject Re: pg_dump / copy bugs with "big lines" ?
Date
Msg-id 28a1f376-e006-4ecf-93f5-133737652c5c@mm
In response to Re: pg_dump / copy bugs with "big lines" ?  ("Daniel Verite" <daniel@manitou-mail.org>)
Responses Re: pg_dump / copy bugs with "big lines" ?
List pgsql-hackers
    Daniel Verite wrote:

> # \copy bigtext2 from '/var/tmp/bigtext.sql'
> ERROR:  54000: out of memory
> DETAIL:  Cannot enlarge string buffer containing 1073741808 bytes by 8191
> more bytes.
> CONTEXT:  COPY bigtext2, line 1
> LOCATION:  enlargeStringInfo, stringinfo.c:278

To get past that problem, I've tried tweaking the StringInfoData
used for COPY FROM, like the original patch does in CopyOneRowTo.

It turns out to fail a bit later, when trying to build a tuple
from the big line in heap_form_tuple():
 tuple = (HeapTuple) palloc0(HEAPTUPLESIZE + len);

which fails because (HEAPTUPLESIZE + len) is again rejected as
an invalid allocation size, the size being 1468006476 in my test.

At this point it feels like a dead end, at least for the idea that extending
StringInfoData might suffice to enable COPYing such large rows.

Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite


