Re: Troubles dumping a very large table. - Mailing list pgsql-performance

From Ted Allen
Subject Re: Troubles dumping a very large table.
Date
Msg-id 49583FF3.8030105@blackducksoftware.com
In response to Re: Troubles dumping a very large table.  ("Merlin Moncure" <mmoncure@gmail.com>)
List pgsql-performance
I was hoping to use pg_dump and not have to do a manual dump, but if the
latest suggestion (moving rows >300MB elsewhere and dealing with them
later) does not work, I'll try that.
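
Roughly what I have in mind for setting the big rows aside; the table and
column names here (big_table, payload, id) are just placeholders for our
actual schema:

    -- Find rows whose bytea value would blow past ~300MB after the ~3x
    -- text-COPY expansion Tom mentioned.
    SELECT id, octet_length(payload) AS bytes
      FROM big_table
     WHERE octet_length(payload) > 300 * 1024 * 1024
     ORDER BY bytes DESC;

    -- Park them in a side table and remove them from the main one so
    -- pg_dump can get through the rest; deal with these separately.
    CREATE TABLE big_table_oversize AS
      SELECT * FROM big_table
       WHERE octet_length(payload) > 300 * 1024 * 1024;

    DELETE FROM big_table
     WHERE octet_length(payload) > 300 * 1024 * 1024;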

Thanks everyone.

Merlin Moncure wrote:
> On Fri, Dec 26, 2008 at 12:38 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
>> Ted Allen <tallen@blackducksoftware.com> writes:
>>
>>> 600MB measured by get_octet_length on data.  If there is a better way to
>>> measure the row/cell size, please let me know because we thought it was
>>> the >1GB problem too.  We thought we were being conservative by getting
>>> rid of the larger rows but I guess we need to get rid of even more.
>>>
>> Yeah, the average expansion of bytea data in COPY format is about 3X :-(
>> So you need to get the max row length down to around 300mb.  I'm curious
>> how you got the data in to start with --- were the values assembled on
>> the server side?
>>
>
> Wouldn't binary style COPY be more forgiving in this regard?  (if so,
> the OP might have better luck running COPY BINARY)...
>
> This also goes for libpq traffic: large (>1MB) bytea values definitely
> want to be passed using the binary switch in the protocol.
>
> merlin
>
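
If we do end up doing the COPY by hand, something like the binary format
Merlin suggests would look like this (table name and file path are
placeholders; binary COPY writes bytea verbatim instead of escaping each
byte, so the dumped row stays close to its stored size):

    COPY big_table TO '/backups/big_table.copy' WITH BINARY;

    -- and later, to load it back:
    COPY big_table FROM '/backups/big_table.copy' WITH BINARY;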

