On Tue, Nov 28, 2017 at 12:04 PM, Steven Lembark <lembark@wrkhors.com> wrote:
> On Tue, 28 Nov 2017 11:17:07 -0600
> Ted Toth <txtoth@gmail.com> wrote:
>
>> I'm writing a migration utility to move data from a non-RDBMS data
>> source to a Postgres DB. Currently I'm generating SQL INSERT
>> statements involving 6 related tables for each 'thing'. With 100k or
>> more 'things' to migrate I'm generating a lot of statements, and
>> when I try to import using psql, Postgres fails with 'out of memory'
>> when running on a Linux VM with 4G of memory. If I break it into
>> smaller chunks, say ~50k statements, then the import succeeds. I can
>> change my migration utility to generate multiple files, each with a
>> limited number of INSERTs, to get around this issue, but maybe
>> there's another/better way?
>
> Chunking the loads has a number of advantages beyond avoiding
> OOM errors, not least the ability to load chunks in parallel and
> to restart after other failures without re-processing all of the
> input data.
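>
> A rough sketch of the splitting step, in Python (untested; the file
> names and chunk size are made up, and it assumes your generator
> emits one complete INSERT statement per line):
>
>     # split_inserts.py -- split generated INSERT statements into
>     # chunk files, each wrapped in its own transaction so a failed
>     # chunk can be re-run without touching the others.
>     CHUNK_SIZE = 50_000   # statements per file; tune to taste
>
>     def flush(chunk, n):
>         with open(f"chunk_{n:04d}.sql", "w") as out:
>             out.write("BEGIN;\n")
>             out.writelines(chunk)
>             out.write("COMMIT;\n")
>         return n + 1
>
>     def write_chunks(src_path):
>         chunk, n = [], 0
>         with open(src_path) as src:
>             for line in src:
>                 chunk.append(line)
>                 if len(chunk) == CHUNK_SIZE:
>                     n = flush(chunk, n)
>                     chunk = []
>         if chunk:
>             flush(chunk, n)
>
>     if __name__ == "__main__":
>         import sys
>         write_chunks(sys.argv[1])
>
> One transaction per chunk also tends to be quite a bit faster than
> letting psql commit each INSERT separately.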
>
> Note that 4GiB of core is not all that much by today's standards.
> You might want to run top/vmstat and ask if the PG daemons are using
> all/most of the available memory. If PG is sucking up all of the core
> as it is then tuning the database may not have much of an effect; if
> there is lots of spare memory then it'll be worth looking at ways to
> tune PG.
>
> Note also that "out of memory" frequently means virtual memory.
>
> Q: Does the VM have swap configured?
>
> If not, add 8GiB and see if that solves your problem; if it does
> have swap, how much of it is in use when you get the OOM error?
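>
> To make the earlier restart point concrete: the loader can be
> equally dumb -- run psql once per chunk file and keep a record of
> what finished, so a crash partway through doesn't force a full
> re-run. Another untested sketch (it assumes the chunk files from
> above and a database named "mydb"):
>
>     # load_chunks.py -- run psql on each chunk file in order,
>     # remembering completed files in a marker log so the load
>     # can be restarted where it left off.
>     import glob, os, subprocess
>
>     DONE = "loaded.log"   # one completed file name per line
>
>     done = set()
>     if os.path.exists(DONE):
>         done = set(open(DONE).read().split())
>
>     for path in sorted(glob.glob("chunk_*.sql")):
>         if path in done:
>             continue
>         # ON_ERROR_STOP makes psql exit non-zero on the first error
>         subprocess.run(
>             ["psql", "-v", "ON_ERROR_STOP=1", "-f", path, "mydb"],
>             check=True)
>         with open(DONE, "a") as log:
>             log.write(path + "\n")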
>
>
> --
> Steven Lembark 1505 National Ave
> Workhorse Computing Rockford, IL 61103
> lembark@wrkhors.com +1 888 359 3508
>
I understand that 4G is not much ... Yeah, in top I see the postmaster
process's RES grow until it fails. The VM is basically a CentOS 6 box
with 4G of swap.