Re: batch insertion - Mailing list pgsql-general

From Adrian Klaver
Subject Re: batch insertion
Date
Msg-id 52195811.4020702@gmail.com
In response to batch insertion  (Korisk <korisk@yandex.ru>)
List pgsql-general
On 08/24/2013 05:15 PM, Korisk wrote:
> Hi!
> I want to quickly insert a lot of data (in the form of triplets) into the db. The data is formed dynamically, so "COPY" is not suitable.
> I tried batch insert like this:
>
> insert into triplets values (1,1,1);
> insert into triplets values (1,1,1), (3,2,5), (4,5,5);
> ...
> insert into triplets values (1,1,1), (3,2,5), (4,5,5) .... ;
>
> The more triplets I use, the quicker the operation is.
> With preparation it looks like this:
>
> res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, $2::bigint, $3::float);",3, NULL);
> ...
> res = PQprepare(conn, "qu", "insert into triplets values ($1::bigint, $2::bigint, $3::float),
>     ($4::bigint, $5::bigint, $6::float), ($7::bigint, $8::bigint, $9::float),
>     ($10::bigint, $11::bigint, $12::float);", 12, NULL);
> ...
>
> The question:
> Is there any way to prepare a query with any number of triplets without building such a long string?

Others have already posted about using COPY, so I will go another route.
The increased speed you are seeing is probably the result of more data
being included in each transaction. From your example it is not clear
whether you are batching your INSERTs. If not, that is another way to
go: prepare your statement, then loop through your data in batches,
where each batch is wrapped in a BEGIN/COMMIT. This is one of the
reasons COPY is so fast: the data is loaded inside a single
transaction. I do not use libpq myself, so take the sketch below with a
grain of salt.
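
Untested, but the batched loop would look roughly like this. It assumes
a table triplets(a bigint, b bigint, c float) and an already-open
PGconn *conn; the dummy values stand in for your dynamically generated
data:

#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

#define BATCH_SIZE 1000

static void die(PGconn *conn, PGresult *res, const char *msg)
{
    fprintf(stderr, "%s: %s", msg, PQerrorMessage(conn));
    if (res)
        PQclear(res);
    PQfinish(conn);
    exit(1);
}

/* Insert nrows triplets using a prepared single-row INSERT,
 * committing every BATCH_SIZE rows. */
static void insert_triplets(PGconn *conn, long nrows)
{
    PGresult *res;
    long i;

    /* Prepare once; each execution inserts one row. */
    res = PQprepare(conn, "qu",
                    "insert into triplets values "
                    "($1::bigint, $2::bigint, $3::float)",
                    3, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        die(conn, res, "PQprepare failed");
    PQclear(res);

    for (i = 0; i < nrows; i++) {
        char a[32], b[32], c[64];
        const char *params[3] = { a, b, c };

        /* Open a transaction at the start of each batch. */
        if (i % BATCH_SIZE == 0) {
            res = PQexec(conn, "BEGIN");
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                die(conn, res, "BEGIN failed");
            PQclear(res);
        }

        /* Dummy data; substitute the real triplet here. */
        snprintf(a, sizeof a, "%ld", i);
        snprintf(b, sizeof b, "%ld", i + 1);
        snprintf(c, sizeof c, "%f", i * 0.5);

        res = PQexecPrepared(conn, "qu", 3, params, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            die(conn, res, "INSERT failed");
        PQclear(res);

        /* Commit at the end of each batch and after the last row. */
        if (i % BATCH_SIZE == BATCH_SIZE - 1 || i == nrows - 1) {
            res = PQexec(conn, "COMMIT");
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                die(conn, res, "COMMIT failed");
            PQclear(res);
        }
    }
}

The point is simply that the prepared statement is created once and the
per-row inserts are grouped into larger transactions, which is where
the speedup comes from.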

>
> Thank you.
>
>


--
Adrian Klaver
adrian.klaver@gmail.com

