Francisco Reyes wrote:
> Tom Lane writes:
>
> >What's more, because the line and field buffers are StringInfos that are
> >intended for reuse across multiple lines/fields, they're not simply made
> >equal to the exact size of the big field. They're rounded up to the
> >next power-of-2, ie, if you've read an 84MB field during the current
> >COPY IN then they'll be 128MB apiece. In short, COPY is going to need
> >508MB of process-local RAM to handle this row.
>
> Of shared memory?
No, process-local memory, as Tom said.
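The power-of-2 rounding Tom describes can be sketched like this (a minimal illustration only; `next_pow2` is a hypothetical helper mirroring the doubling that PostgreSQL's StringInfo enlargement does, not actual PostgreSQL code):

```python
def next_pow2(needed: int) -> int:
    """Round a requested buffer size up to the next power of two,
    the way a doubling buffer-growth strategy ends up allocating."""
    size = 1
    while size < needed:
        size *= 2
    return size

# An 84 MB field forces each reusable buffer up to 128 MB:
mb = 1024 * 1024
print(next_pow2(84 * mb) // mb)  # -> 128
```

So both the line buffer and the field buffer end up at 128 MB apiece even though the field itself is only 84 MB, which is how the total climbs toward the 508 MB figure.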
> I am a little confused; yesterday you said that increasing shared_buffers
> may be counterproductive.
Yes, that's what he said.
> Or you are referring to the OS size?
Yes
> The OS size is 1.6GB, but today I am going to try increasing kern.maxssiz.
> Vivek recommended increasing it.
The problem is probably the ulimit. I don't know what kern.maxssiz is,
though.
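You can check the relevant per-process limits from a Python client with the standard resource module (a quick sketch; the RLIMIT_AS guard is there because not every platform defines it — and note that on FreeBSD, kern.maxssiz is documented as the stack-size cap, while kern.maxdsiz caps the data segment, which is the one that matters here):

```python
import resource

# Print the soft/hard caps that can limit a process's memory use.
# RLIMIT_DATA is the data-segment cap (the usual "ulimit -d");
# RLIMIT_AS, where available, caps the total address space.
for name in ("RLIMIT_DATA", "RLIMIT_STACK", "RLIMIT_AS"):
    limit = getattr(resource, name, None)
    if limit is not None:
        soft, hard = resource.getrlimit(limit)
        print(name, soft, hard)
```

A value of -1 (RLIM_INFINITY) means the limit is unset; anything below roughly 508 MB on RLIMIT_DATA would explain the failure Tom diagnosed.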
> >In short, you need a bigger per-process memory allowance.
>
> I wrote a mini python program to copy one of the records that is failing.
> The client program is using 475MB with 429MB resident.
>
> The server has been running all night on this single insert.
> The server is using 977MB with 491MB resident.
> Yesterday I saw it grow as big as 1000MB with 900MB+ resident.
Can you send the program along? And the table definition (including
indexes, etc)?
--
Alvaro Herrera Developer, http://www.PostgreSQL.org/
"Find a bug in a program, and fix it, and the program will work today.
Show the program how to find and fix a bug, and the program
will work forever" (Oliver Silfridge)