Re: Why insertion throughput can be reduced with an increase of batch size? - Mailing list pgsql-general

From Adrian Klaver
Subject Re: Why insertion throughput can be reduced with an increase of batch size?
Msg-id 2e81486a-beba-57b7-9bb0-5d6204b4d652@aklaver.com
In response to Why insertion throughput can be reduced with an increase of batch size?  (Павел Филонов <filonovpv@gmail.com>)
List pgsql-general
On 08/21/2016 11:53 PM, Павел Филонов wrote:
> Greetings to everybody!
>
> I recently ran into an observation that I cannot explain: why does
> insertion throughput drop when the batch size increases?
>
> Brief description of the experiment.
>
>   * PostgreSQL 9.5.4 as server
>   * https://github.com/sfackler/rust-postgres library as client driver
>   * one relation with two indices (schema in attachment)
>
> Experiment steps:
>
>   * populate the DB with 259200000 random records
>   * run insertions for 60 seconds with one client thread and batch size = m
>   * record insertions per second (ips) in the client's code
>
> I plot the median ips against m for m in [2^0, 2^1, ..., 2^15] (plot in attachment).
>
>
> On the figure we can see that between m = 128 and m = 256 the throughput
> drops from 13,000 ips to 5,000.
>
> I hope someone can help me understand the reason for this behavior.

To add to Jeff's questions:

You say you are measuring the IPS in the client's code.

Where is the client: on the same machine, on the same network, or on a remote network?
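
For concreteness, here is a minimal sketch of what a client-side benchmark
of this shape could look like. It is an assumption on my part: it uses the
current "postgres" crate API (the successor to the 2016-era rust-postgres
API you used) and a made-up table t (a bigint, b double precision); your
actual schema and data generation will differ. The point is that ips
measured this way includes network round trips and driver overhead, not
just server-side insert cost:

use postgres::{Client, NoTls};
use std::time::{Duration, Instant};

fn main() -> Result<(), postgres::Error> {
    // Assumed connection string and table; substitute your own.
    let mut client = Client::connect("host=localhost user=postgres dbname=test", NoTls)?;

    let m = 256; // batch size under test
    let run_for = Duration::from_secs(60);
    let start = Instant::now();
    let mut inserted: u64 = 0;

    while start.elapsed() < run_for {
        // One multi-row INSERT per batch, i.e. one round trip per batch:
        //   INSERT INTO t (a, b) VALUES ($1,$2),($3,$4),...
        // Note the extended-query protocol caps a statement at 65535 bind
        // parameters, so very large m needs splitting (or COPY) anyway.
        let mut placeholders = Vec::with_capacity(m);
        let mut owned: Vec<Box<dyn postgres::types::ToSql + Sync>> =
            Vec::with_capacity(2 * m);
        for i in 0..m {
            placeholders.push(format!("(${},${})", 2 * i + 1, 2 * i + 2));
            // Deterministic stand-ins for the random records in the test.
            owned.push(Box::new((inserted + i as u64) as i64));
            owned.push(Box::new(i as f64));
        }
        let sql = format!("INSERT INTO t (a, b) VALUES {}", placeholders.join(","));
        let params: Vec<&(dyn postgres::types::ToSql + Sync)> =
            owned.iter().map(|b| b.as_ref()).collect();
        client.execute(sql.as_str(), &params)?;
        inserted += m as u64;
    }

    // ips is computed client side, so it includes network latency and
    // per-statement driver overhead, not just server work.
    println!("{:.0} ips", inserted as f64 / start.elapsed().as_secs_f64());
    Ok(())
}

Whether your real client batches with one multi-row INSERT like this,
executes a prepared single-row INSERT m times, or uses COPY changes what
the numbers mean, so that detail would help as well.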

>
> --
> Best regards
> Filonov Pavel


--
Adrian Klaver
adrian.klaver@aklaver.com

