Re: large numbers of inserts out of memory strategy - Mailing list pgsql-general

From Steven Lembark
Subject Re: large numbers of inserts out of memory strategy
Date
Msg-id 20171201114942.177b394c@wrkhors.com
In response to Re: large numbers of inserts out of memory strategy  (Ted Toth <txtoth@gmail.com>)
List pgsql-general
On Thu, 30 Nov 2017 08:43:32 -0600
Ted Toth <txtoth@gmail.com> wrote:

> What is the downside of using a DO block? I'd have to do a nextval on
> each sequence before I could use currval, right? Or I could do 'select
> last_value from <sequence>'.

You are creating a piece of code that has to be parsed, tokenized,
and compiled prior to execution. What's biting you is that you've
created a function the size of your dataset.

If you like do-blocks, then write a short block that inserts one record
using placeholders, and call it a few zillion times.
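In plain SQL the idea looks something like this (a minimal sketch only;
the table and column names are made up, and a server-side PREPARE stands
in for the parameterized single-row insert):

    -- prepare the one-row insert once
    PREPARE ins_row ( text, integer ) AS
        insert into yourtable ( field1, field2 ) values ( $1, $2 );

    -- then execute it once per data row, inside a single transaction
    BEGIN;
    EXECUTE ins_row ( 'first value', 1 );
    EXECUTE ins_row ( 'second value', 2 );
    -- ... a few zillion more ...
    COMMIT;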

That or (in DBI-speak):


    eval
    {
        # die on errors; turn off autocommit so the whole load
        # runs in a single transaction.
        $dbh->{ RaiseError  } = 1;
        $dbh->{ AutoCommit  } = 0;

        # prepare the single-row insert once.
        my $sth = $dbh->prepare
        (
            'insert into yourtable ( field1, field2 ) values ( $1, $2 )'
        );

        # execute it once per row.
        $sth->execute( @$_ ) for @rows;

        $dbh->commit;
    }
    or die "Failed execution: $@";

which will be nearly as efficient in the long run.

That, or just import the data from a CSV/TSV file (there are good
examples of data import available in the PG docs).
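For instance (a sketch only; the table, column, and file names here are
assumptions), COPY on the server or psql's \copy on the client will load
a CSV directly:

    -- server-side COPY: the file must be readable by the server process
    COPY yourtable ( field1, field2 )
        FROM '/path/to/data.csv'
        WITH ( FORMAT csv, HEADER true );

    -- or from psql, reading the file on the client side
    \copy yourtable ( field1, field2 ) from 'data.csv' with ( format csv, header true )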

-- 
Steven Lembark                                       1505 National Ave
Workhorse Computing                                 Rockford, IL 61103
lembark@wrkhors.com                                    +1 888 359 3508

