Re: How batch processing works - Mailing list pgsql-general

From Lok P
Subject Re: How batch processing works
Msg-id CAKna9Vbt1VJu7Oa8FTWasgby+-kJn7omOhbfmWzkdpVwBiqNzQ@mail.gmail.com
In response to Re: How batch processing works  (Michał Kłeczek <michal@kleczek.org>)
List pgsql-general


On Sat, Sep 21, 2024 at 9:51 AM Michał Kłeczek <michal@kleczek.org> wrote:
Hi,

> On 19 Sep 2024, at 07:30, Lok P <loknath.73@gmail.com> wrote:
>
[snip]
>
> Method-4
>
> INSERT INTO parent_table VALUES  (1, 'a'), (2, 'a');
> INSERT INTO child_table VALUES   (1,1, 'a'), (1,2, 'a');
> commit;

I’ve done some batch processing of JSON messages from Kafka in Java.
By far the most performant way was to:

1. Use prepared statements
2. Parse JSON messages in Postgres
3. Process messages in batches

All three can be achieved by using arrays to pass batches:

WITH parsed AS (
  SELECT msg::json FROM unnest(?) AS t(msg)
),
parents AS (
  INSERT INTO parent SELECT … FROM parsed RETURNING ...
)
INSERT INTO child SELECT … FROM parsed…

Note the single parameter that you can bind to a String[].
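
For concreteness, a minimal JDBC sketch of that pattern (a sketch only: the connection URL, column names, and JSON keys are hypothetical; it assumes the PostgreSQL JDBC driver, where Connection.createArrayOf("text", ...) builds the text[] bound to the single ?):

import java.sql.Array;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ArrayBatchInsert {
    // One statement parses the whole batch and feeds both inserts.
    // Column names and JSON keys are hypothetical.
    private static final String SQL =
        "WITH parsed AS ("
      + "  SELECT msg::json AS msg FROM unnest(?::text[]) AS t(msg)"
      + "), parents AS ("
      + "  INSERT INTO parent (id, name)"
      + "  SELECT (msg->>'id')::int, msg->>'name' FROM parsed"
      + "  RETURNING id"
      + ") "
      + "INSERT INTO child (parent_id, name) "
      + "SELECT (msg->>'id')::int, msg->>'child_name' FROM parsed";

    public static void main(String[] args) throws Exception {
        String[] batch = {
            "{\"id\": 1, \"name\": \"a\", \"child_name\": \"c1\"}",
            "{\"id\": 2, \"name\": \"a\", \"child_name\": \"c2\"}"
        };
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/test", "user", "pass")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(SQL)) {
                // The whole batch travels as a single text[] parameter:
                // one prepared statement, one round trip, one parse pass.
                Array msgs = conn.createArrayOf("text", batch);
                ps.setArray(1, msgs);
                ps.executeUpdate();
            }
            conn.commit(); // one commit per batch
        }
    }
}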

Hope that helps.


Got your point.
But I am wondering why we don't see any difference in performance between method-2 and method-3 above. Does that mean I am testing this the wrong way, or is it the expected behaviour? In other words, is there no point in converting the row-by-row inserts into a bulk insert, because just changing the commit frequency does the same job in a row-by-row approach?
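
For reference, a sketch of the two shapes that question contrasts (the methods themselves are snipped above, so this assumes one is row-by-row inserts with a single commit per batch and the other a multi-row VALUES insert per batch; the connection URL is hypothetical, and parent_table follows method-4 above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class RowByRowVsMultiRow {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/test", "user", "pass")) {
            conn.setAutoCommit(false);

            // Row-by-row: one statement (and one round trip) per row,
            // but still only one commit for the whole batch.
            try (PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO parent_table VALUES (?, ?)")) {
                for (int i = 1; i <= 100; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "a");
                    ps.executeUpdate();
                }
            }
            conn.commit();

            // Multi-row VALUES: the same rows in half as many statements
            // (a fixed two-row list here just to keep the sketch short).
            try (PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO parent_table VALUES (?, ?), (?, ?)")) {
                for (int i = 101; i <= 200; i += 2) {
                    ps.setInt(1, i);
                    ps.setString(2, "a");
                    ps.setInt(3, i + 1);
                    ps.setString(4, "a");
                    ps.executeUpdate();
                }
            }
            conn.commit();
        }
    }
}

Over a low-latency (e.g. local) connection, the per-statement round trips in the first shape can be cheap enough that the commit cost dominates, which would make the two look similar; the gap between them usually widens as network latency grows.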
