On Sun, Mar 29, 2020 at 9:50 PM Andres Freund <andres@anarazel.de> wrote:
> On March 29, 2020 11:24:32 AM PDT, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:
> > clearly a big win on the majority
> > of workloads, I think we still need to investigate different workloads
> > on different hardware to ensure there is no regression.
>
> Definitely. Which workloads are you thinking of? I can think of these affected facets: snapshot speed, commit speed
> with writes, connection establishment, prepared transaction speed. All in the small and large connection count cases.
The following pgbench scripts come to my mind first:
1) SELECT txid_current(); (artificial but good for checking corner case)
2) Single insert statement (as example of very short transaction)
3) Plain pgbench read-write (you already did it for sure)
4) pgbench read-write script with an increased number of SELECTs: repeat
the SELECT from pgbench_accounts, say, 10 times with different aids (see
the sketch after this list).
5) 10% pgbench read-write, 90% pgbench read-only
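For (4), a minimal untested sketch of such a custom script: it is just the
standard tpcb-like transaction with extra account lookups (shown with 3
extra SELECTs instead of 10 for brevity):

  \set aid random(1, 100000 * :scale)
  \set aid2 random(1, 100000 * :scale)
  \set aid3 random(1, 100000 * :scale)
  \set bid random(1, 1 * :scale)
  \set tid random(1, 10 * :scale)
  \set delta random(-5000, 5000)
  BEGIN;
  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
  -- extra lookups on different aids, repeated as many times as desired
  SELECT abalance FROM pgbench_accounts WHERE aid = :aid2;
  SELECT abalance FROM pgbench_accounts WHERE aid = :aid3;
  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
    VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
  END;

The mixes in (5) can then be run with pgbench's script weights, e.g.
pgbench -f rw.sql@10 -f ro.sql@90 (file names here are arbitrary).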
> I did measurements on all of those but prepared xacts, fwiw
Great, it would be nice to see the results in the thread.
> That definitely needs to be measured, due to the locking changes around procarrayadd/remove.
>
> I don't think regressions besides perhaps 2pc are likely - there's nothing really getting more expensive but
> procarrayadd/remove.
I agree that ProcArrayAdd()/Remove() should be the first subject of
investigation, but other cases should be checked as well, IMHO.
Regarding 2PC, the following scenarios come to my mind:
1) pgbench read-write modified so that every transaction is prepared
first, then committed with COMMIT PREPARED (see the sketch after this
list).
2) 10% 2PC pgbench read-write, 90% normal pgbench read-write
3) 10% 2PC pgbench read-write, 90% normal pgbench read-only
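For (1), something along these lines should work (untested sketch; it
requires max_prepared_transactions > 0, assumes a recent pgbench where
:client_id is predefined, and relies on pgbench interpolating variables
inside string literals):

  \set aid random(1, 100000 * :scale)
  \set delta random(-5000, 5000)
  BEGIN;
  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
  -- GID 'pgb_:client_id' assumes pgbench substitutes :client_id inside
  -- the literal; it only needs to be unique across concurrent clients,
  -- since each client has at most one prepared transaction at a time
  PREPARE TRANSACTION 'pgb_:client_id';
  COMMIT PREPARED 'pgb_:client_id';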
------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company