
From: Masahiko Sawada
Subject: Re: Performance degradation on concurrent COPY into a single relation in PG16.
Msg-id: CAD21AoA65X0zVqrMH6=LVZkLkjSSQqsC_ugre0-QwkaGE2Y=3A@mail.gmail.com
In response to: Re: Performance degradation on concurrent COPY into a single relation in PG16. (Andres Freund <andres@anarazel.de>)
Responses: Re: Performance degradation on concurrent COPY into a single relation in PG16.
List: pgsql-hackers
On Tue, Aug 8, 2023 at 3:10 AM Andres Freund <andres@anarazel.de> wrote:
>
> Hi,
>
> On 2023-08-07 23:05:39 +0900, Masahiko Sawada wrote:
> > On Mon, Aug 7, 2023 at 3:16 PM David Rowley <dgrowleyml@gmail.com> wrote:
> > >
> > > On Wed, 2 Aug 2023 at 13:35, David Rowley <dgrowleyml@gmail.com> wrote:
> > > > So, it looks like this item can be closed off.  I'll hold off from
> > > > doing that for a few days just in case anyone else wants to give
> > > > feedback or test themselves.
> > >
> > > Alright, closed.
> >
> > IIUC the problem with multiple concurrent COPY is not resolved yet.
>
> Yea - it was just hard to analyze until the other regressions were fixed.
>
>
> > The result for nclients = 1 became better thanks to the recent fixes, but
> > there still seems to be a performance regression at nclients = 2-16
> > (on RHEL 8 and 9). Andres reported[1] that after changing
> > MAX_BUFFERED_TUPLES to 5000 the numbers became a lot better, but that
> > would not be the solution, as he mentioned.
>
> I think there could be a quite simple fix: Track by how much we've extended
> the relation previously in the same bistate. If we already extended by many
> blocks, it's very likely that we'll do so further.
>
> A simple prototype patch attached. The results for me are promising. I copied
> a smaller file [1], to have more accurate throughput results at shorter runs
> (15s).

Thank you for the patch!
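
To make sure I'm reading the idea correctly, here is a tiny standalone model
of the heuristic as I understand it. This is not the patch itself; the field
name already_extended_by, the waiter-based base amount, and the 64-block cap
are names and numbers I made up purely for illustration:

#include <stdio.h>

/*
 * Toy model only: remember how many blocks this bulk-insert state has
 * already extended the relation by, and use that to scale up the next
 * extension request.
 */
typedef struct ToyBulkInsertState
{
    unsigned    already_extended_by;    /* hypothetical field */
} ToyBulkInsertState;

static int
toy_extend_by_pages(ToyBulkInsertState *bistate, int lock_waiters)
{
    /* made-up base: one page, plus more if other backends are waiting */
    int         extend_by = 1 + lock_waiters * 20;

    /*
     * If this bistate has already extended the relation a lot, it is very
     * likely to keep doing so, so extend by more pages up front (capped
     * at an arbitrary 64 pages here).
     */
    if (bistate->already_extended_by > (unsigned) extend_by)
        extend_by = bistate->already_extended_by < 64
            ? (int) bistate->already_extended_by
            : 64;

    return extend_by;
}

int
main(void)
{
    ToyBulkInsertState bistate = {0};

    for (int i = 0; i < 6; i++)
    {
        int         n = toy_extend_by_pages(&bistate, 0);

        bistate.already_extended_by += n;
        printf("extension %d: %d page(s), %u blocks total\n",
               i + 1, n, bistate.already_extended_by);
    }
    return 0;
}

In other words, once a bistate has caused several extensions it starts asking
for bigger batches up front instead of taking the extension lock again and
again, which, as I understand it, is where the concurrent COPY case loses
time.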

>
> HEAD before:
> clients      tps
> 1             41
> 2             76
> 4            136
> 8            248
> 16           360
> 32           375
> 64           317
>
>
> HEAD after:
> clients      tps
> 1             43
> 2             80
> 4            155
> 8            280
> 16           369
> 32           405
> 64           344
>
> Any chance you could rerun your benchmark? I don't see as much of a
> regression vs 16 as you...

Sure. The results are promising for me too:

nclients = 1, execution time = 13.743
nclients = 2, execution time = 7.552
nclients = 4, execution time = 4.758
nclients = 8, execution time = 3.035
nclients = 16, execution time = 2.172
nclients = 32, execution time = 1.959
nclients = 64, execution time = 1.819
nclients = 128, execution time = 1.583
nclients = 256, execution time = 1.631

Here are the results of the same benchmark you used (a rough sketch of the
workload shape follows the numbers):

w/o patch:
clients    tps
1       66.702
2       59.696
4       73.783
8       168.115
16      400.134
32      574.098
64      565.373
128     526.303
256     591.751

w/ patch:
clients    tps
1       67.735
2       122.534
4       240.707
8       398.944
16      541.097
32      643.083
64      614.775
128     616.007
256     577.885
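
For anyone who wants to reproduce this kind of workload, the shape of the
benchmark is essentially N clients all streaming the same data file into one
table. Something along these lines with libpq would do it; this is not the
exact script used in this thread, and the connection string, table name
(copytest), and file path are placeholders:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
#include <libpq-fe.h>

/* One client: stream the data file into the shared target table. */
static void
run_one_copy(const char *conninfo, const char *datafile)
{
    PGconn     *conn = PQconnectdb(conninfo);
    PGresult   *res;
    FILE       *fp;
    char        buf[65536];
    size_t      n;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        exit(1);
    }

    res = PQexec(conn, "COPY copytest FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        exit(1);
    }
    PQclear(res);

    fp = fopen(datafile, "r");
    if (fp == NULL)
    {
        fprintf(stderr, "could not open %s\n", datafile);
        exit(1);
    }
    while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)
        PQputCopyData(conn, buf, (int) n);
    fclose(fp);

    PQputCopyEnd(conn, NULL);
    while ((res = PQgetResult(conn)) != NULL)
        PQclear(res);
    PQfinish(conn);
}

int
main(int argc, char **argv)
{
    int         nclients = (argc > 1) ? atoi(argv[1]) : 1;

    /* fork nclients processes, all COPYing into the same relation */
    for (int i = 0; i < nclients; i++)
    {
        if (fork() == 0)
        {
            run_one_copy("dbname=postgres", "/tmp/copytest.data");
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;
    return 0;
}

Build with something like "cc copybench.c -lpq -o copybench" and time
"./copybench 16", recreating the table between runs.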

Regards,

--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com


