Re: Batch Inserts - Mailing list pgsql-general

From jco@cornelius-olsen.dk
Subject Re: Batch Inserts
Msg-id OF82825A9E.E13F3CD8-ONC1256C8D.0004C8E7@dk
Whole thread Raw
List pgsql-general

Hi Doug,

The latter is the case: only one transaction is performed. Transactions cannot be nested, so when you use an explicit BEGIN ... COMMIT, autocommit does not apply to the statements inside it.
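This behaviour can be sketched with a small stand-in. The original setup is PostgreSQL accessed over JDBC, which needs a running server to demonstrate; here Python's built-in sqlite3 module plays the same role, since it too autocommits each statement unless an explicit transaction is open. The point is the same: all inserts between BEGIN and COMMIT/ROLLBACK share one transaction.

```python
import sqlite3

# sqlite3 stand-in for the PostgreSQL/JDBC case (illustration only).
conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # statement-level autocommit, like AUTOCOMMIT ON
conn.execute("CREATE TABLE t (id INTEGER)")

cur = conn.cursor()
cur.execute("BEGIN")                 # explicit transaction start overrides autocommit
for i in range(200):
    cur.execute("INSERT INTO t VALUES (?)", (i,))
cur.execute("ROLLBACK")              # undoes all 200 inserts at once --
                                     # they shared a single transaction
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 0
```

Had each INSERT been its own autocommitted transaction, the ROLLBACK would have had nothing to undo and the count would be 200.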

/Jørn Cornelius Olsen



Doug Fields <dfields-pg-general@pexicom.com>
Sent by: pgsql-general-owner@postgresql.org

12-12-2002 00:03

        To:      "Ricardo Ryoiti S. Junior" <suga@netbsd.com.br>
        cc:      pgsql-general@postgresql.org, pgsql-jdbc@postgresql.org
        Subject: Re: [GENERAL] Batch Inserts



Hi Ricardo, list,

One quick question:

>         - If your "data importing" is done via inserts, make sure that the
>batch uses transactions for each 200 inserts (at least or so). If you
>don't, each insert will be a transaction, which will slow you down.

I use JDBC and use it with the default "AUTOCOMMIT ON."

Does doing a statement, in one JDBC execution, of the form:

BEGIN WORK; INSERT ... ; INSERT ... ; INSERT ...; COMMIT;

count as N individual inserts (due to the autocommit setting), or does the
surrounding BEGIN WORK; ... COMMIT; override that setting?

Thanks,

Doug


---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html

