On 25.4.2011 19:31, Alban Hertroys wrote:
> On 25 Apr 2011, at 18:16, Phoenix Kiula wrote:
>
>> If I COPY each individual file back into the table, it works. Slowly,
>> but seems to work. I tried to combine all the files into one go, then
>> truncate the table, and pull it all in in one go (130 million rows or
>> so) but this time it gave the same error. However, it pointed out a
>> specific row where the problem was:
>>
>>
>> COPY links, line 15272357:
>> "16426447 9s2q7 9s2q7 N
http://www.amazon.com/gp/search?camp=1789&creative=9325&ie=UTF8&i..."
>> server closed the connection unexpectedly
>> This probably means the server terminated abnormally
>> before or while processing the request.
>> The connection to the server was lost. Attempting reset: Failed.
>>
>>
>> Is this any use at all? Would appreciate any pointers!
>
>
> I didn't follow the entire thread, so maybe someone mentioned this already, but...
> Usually when we see error messages like those, it turns out the OS is killing the postgres process with its equivalent of a low-on-memory killer. I know Linux has such a beast, and that you can turn it off.
>
> It's a frequently recurring issue on this list; there are bound to be some pointers in the archives ;)
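For reference, the way to "turn it off" on Linux is usually to disable memory overcommit, so that the kernel refuses oversized allocations up front instead of OOM-killing a backend later. A common sketch (the exact sysctl values to pick depend on your workload and distribution):

```shell
# Disable heuristic overcommit; allocations beyond the commit limit
# fail immediately rather than triggering the OOM killer later.
sysctl -w vm.overcommit_memory=2

# Make the setting persistent across reboots.
echo "vm.overcommit_memory = 2" >> /etc/sysctl.conf
```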
I'm not sure whether this COPY failure has the same cause, but the original issue was caused by this:
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: invalid memory alloc
request size 4294967293
pg_dump: The command was: COPY public.links (id, link_id, alias,
aliasentered, url, user_known, user_id, url_encrypted, title, private,
private_key, status, create_date, modify_date, disable_in_statistics,
user_running_id, url_host_long) TO stdout;
pg_dumpall: pg_dump failed on database "snipurl", exiting
i.e. a bad memory alloc request with a negative size. That does not look like the OOM killer terminating the backend.
regards
Tomas