Thread: Re: Batch Inserts

Re: Batch Inserts

From: jco@cornelius-olsen.dk
Date:

Hi Doug,

The latter is the case. Only one transaction is used: transactions cannot be nested, so when you wrap the statements in an explicit BEGIN ... COMMIT, no autocommit is done for the individual statements inside it.
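
If it helps, you can get the same effect by letting JDBC manage the transaction instead of embedding BEGIN/COMMIT in the SQL string. A rough sketch only (the table, connection details and the 200-row commit interval are made-up examples, not anything from your setup):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BatchInsertSketch {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "user", "password");
        try {
            // Turn autocommit off so the driver no longer wraps each
            // INSERT in its own transaction.
            conn.setAutoCommit(false);

            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO items (name, qty) VALUES (?, ?)");
            for (int i = 0; i < 1000; i++) {
                ps.setString(1, "item-" + i);
                ps.setInt(2, i);
                ps.executeUpdate();

                // Commit every 200 rows so each transaction stays small.
                if ((i + 1) % 200 == 0) {
                    conn.commit();
                }
            }
            conn.commit(); // commit whatever is left over
            ps.close();
        } finally {
            conn.close();
        }
    }
}

Committing every couple of hundred rows keeps each transaction reasonably small while still avoiding one commit per INSERT.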

/Jørn Cornelius Olsen



From: Doug Fields <dfields-pg-general@pexicom.com>
Sent by: pgsql-general-owner@postgresql.org
Date: 12-12-2002 00:03
To: "Ricardo Ryoiti S. Junior" <suga@netbsd.com.br>
cc: pgsql-general@postgresql.org, pgsql-jdbc@postgresql.org
Subject: Re: [GENERAL] Batch Inserts



Hi Ricardo, list,

One quick question:

>         - If your "data importing" is done via inserts, make sure that the
>batch uses a transaction for every 200 inserts or so. If you don't,
>each insert will be its own transaction, which will slow you down.

I use JDBC with the default "AUTOCOMMIT ON" setting.

Does executing a statement of the following form in one JDBC execution:

BEGIN WORK; INSERT ... ; INSERT ... ; INSERT ...; COMMIT;

count as N individually committed inserts (due to the autocommit setting), or does the surrounding BEGIN WORK; ... COMMIT; override that setting?
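
Concretely, the code is along these lines (the table, values and connection details are made up just to illustrate):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class MultiStatementInsert {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "user", "password");
        Statement st = conn.createStatement();
        // The whole string goes to the server in one execute() call,
        // with the transaction markers embedded in the SQL itself.
        st.execute("BEGIN WORK; "
                 + "INSERT INTO items (name, qty) VALUES ('a', 1); "
                 + "INSERT INTO items (name, qty) VALUES ('b', 2); "
                 + "INSERT INTO items (name, qty) VALUES ('c', 3); "
                 + "COMMIT");
        st.close();
        conn.close();
    }
}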

Thanks,

Doug

