I think it might not be an issue with pg_dump or the database itself, but more to do with Proxmox and its container limits; this is a VM running on Proxmox, and that could be the cause of the issue. I'll look into that first and come back if it isn't.
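In case it helps anyone else hitting this: on the OpenVZ-based Proxmox kernels (2.6.32-*-pve) the per-container memory limits can be checked from inside the container. A sketch, assuming an OpenVZ kernel (/proc/user_beancounters only exists there and needs root):

```shell
# Show beancounter limits and failure counts inside an OpenVZ container.
# A non-zero failcnt on privvmpages/physpages/oomguarpages means the
# container hit its memory limit, which is what triggers the in-container
# OOM kill even when the host still has free RAM.
grep -E 'uid|privvmpages|physpages|oomguarpages' /proc/user_beancounters
```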
Thanks, Paul
On Tue, Oct 29, 2013 at 1:01 PM, Sergey Klochkov <klochkov@iqbuzz.ru> wrote:
Hello Paul,
Recently I ran into a similar issue; see my message to this mailing list sent on October 1st, 2013. It happens if there is a large enough number of rows in pg_largeobject in the relevant database. pg_dump loads the metadata (owner, access rights, etc.) of each large object separately until it runs out of memory. How many large objects are in your DB?
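You can check with something like this (a sketch; substitute your actual database name for the dbname placeholder; pg_largeobject_metadata exists on PostgreSQL 9.0 and later, on older servers count DISTINCT loid from pg_largeobject instead):

```shell
# Count large objects in the database (dbname is a placeholder).
psql -U postgres -d dbname -At \
  -c "SELECT count(*) FROM pg_largeobject_metadata;"
```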
Hope this helps.
29.10.2013 16:17, Paul Warren wrote:
Hi,
I'm new to the lists, so bear with me please.
I have a VM running Debian 6 with Linux kernel 2.6.32-11-pve. When I run pg_dump -Fc -U postgres dbname > dbname.dump
after a while the process is killed. When I look in /var/log/kern.log I see the following entry: Oct 29 12:07:14 www1 kernel: OOM killed process 0 (pg_dump) vm:5269904kB, rss:5210224kB, swap:0kB
The VM has 16 CPUs and is using around 10%; when I run pg_dump it goes up by 3 to 5%, so it's not using a lot of CPU. It has 30 GB of RAM and 1 GB of swap; the system normally uses around 2 MB of swap and 15 GB of RAM. When pg_dump runs, the swap usage doesn't move, but RAM usage goes up by 3 to 4 GB before the process is killed.
The machine originally had only 20 GB of RAM; I've increased it and still get the same issue. If I run pg_dump on any other database it works fine, and if we run a pg_dump script table by table on the failing database it also works fine.
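For reference, a minimal table-by-table dump script looks roughly like this (a sketch, not our exact script; DB and OUT are placeholder names, and it assumes bash):

```shell
# Dump each user table of $DB into its own custom-format archive.
DB=dbname           # placeholder database name
OUT=/mnt/nas/backup # placeholder output directory
psql -U postgres -d "$DB" -At -c \
  "SELECT schemaname || '.' || tablename FROM pg_tables
   WHERE schemaname NOT IN ('pg_catalog', 'information_schema');" |
while read -r t; do
  # ${t//./_} turns e.g. public.users into public_users for the filename.
  pg_dump -Fc -U postgres -t "$t" "$DB" > "$OUT/${DB}_${t//./_}.dump"
done
```

Note this only dumps table data and definitions; large objects, functions, and other non-table objects are skipped, which is presumably why it avoids the failure.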
The backup goes directly to a NAS device which has 1.2 TB free, and the VM has around 40 GB free as well.
Any ideas what this could be? It seems to be a memory issue, but I can't see what's happening or why.