Re: Any way to speed up INSERT INTO - Mailing list pgsql-performance

From Andres Freund
Subject Re: Any way to speed up INSERT INTO
Msg-id 0F7EBD27-C826-4172-ACA2-6B8A30BE8DEB@anarazel.de
In response to Re: Any way to speed up INSERT INTO  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses RES: Any way to speed up INSERT INTO  (Edson Richter <edsonrichter@hotmail.com>)
List pgsql-performance
Hi,

On March 4, 2022 10:42:39 AM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>aditya desai <admad123@gmail.com> writes:
>> One of the service-layer apps is inserting millions of records into a
>> table, but one row at a time. COPY is the fastest way to import a file
>> into a table, but the application has a requirement to process each row
>> and then insert it. Is there any way this INSERT can be tuned by
>> adjusting parameters? It is taking almost 10 hours for just 2.2 million
>> rows. The table has no indexes or triggers.
>
>Using a prepared statement for the INSERT would help a little bit.
>What would help more, if you don't expect any insertion failures,
>is to group multiple inserts per transaction (ie put BEGIN ... COMMIT
>around each batch of 100 or 1000 or so insertions).  There's not
>going to be any magic bullet that lets you get away without changing
>the app, though.
>
>It's quite possible that network round trip costs are a big chunk of your
>problem, in which case physically grouping multiple rows into each INSERT
>command (... or COPY ...) is the only way to fix it.  But I'd start with
>trying to reduce the transaction commit overhead.
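[Editor's note: Tom's two suggestions above — packing multiple rows into each INSERT command and committing per batch rather than per row — can be sketched like this. This is a minimal Python sketch, not from the thread; the table name `t`, the columns, and the use of a psycopg-style `%s`-placeholder driver are all assumptions.]

```python
# Build one parameterized multi-row INSERT so N rows cost one round trip
# instead of N. Table and column names here are placeholders.

def multirow_insert_sql(table, columns, nrows):
    """Return e.g. 'INSERT INTO t (a, b) VALUES (%s, %s), (%s, %s)'."""
    row = "(" + ", ".join(["%s"] * len(columns)) + ")"
    return (f"INSERT INTO {table} ({', '.join(columns)}) VALUES "
            + ", ".join([row] * nrows))

# With a psycopg-style cursor (hypothetical `cur`/`conn`/`rows` names),
# 1000 rows per statement and one COMMIT per batch look roughly like:
#
#   sql = multirow_insert_sql("t", ["a", "b"], 1000)
#   for batch in thousand_row_batches(rows):
#       cur.execute(sql, [v for row in batch for v in row])
#       conn.commit()   # one commit per 1000 rows, not per row
```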

Pipelining could also help.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
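[Editor's note: Andres' pointer to pipelining can be sketched with psycopg 3's pipeline mode, which queues statements without waiting for each server reply. The driver choice, connection string, and table are assumptions; the thread names none of them.]

```python
import itertools

def chunks(rows, size):
    """Yield successive lists of up to `size` rows."""
    it = iter(rows)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

def insert_pipelined(conninfo, rows, batch_size=1000):
    """Queue INSERTs inside a pipeline so round trips overlap,
    then commit once per batch (placeholder table t(a, b))."""
    import psycopg  # psycopg 3; imported lazily so chunks() works standalone
    with psycopg.connect(conninfo) as conn:
        with conn.cursor() as cur:
            for batch in chunks(rows, batch_size):
                with conn.pipeline():  # results collected in bulk on exit
                    for row in batch:
                        cur.execute(
                            "INSERT INTO t (a, b) VALUES (%s, %s)", row)
                conn.commit()  # one commit per batch, as Tom suggested
```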


