Re: Transaction commits VS Transaction commits (with parallel) VS query mean time - Mailing list pgsql-hackers

From Haribabu Kommi
Subject Re: Transaction commits VS Transaction commits (with parallel) VS query mean time
Date
Msg-id CAJrrPGcYz2908E0XzXXPEDS24+tg-WT0JL9_WhDvcg-ZBY_XQQ@mail.gmail.com
In response to Transaction commits VS Transaction commits (with parallel) VS query mean time  (Haribabu Kommi <kommi.haribabu@gmail.com>)
Responses Re: Transaction commits VS Transaction commits (with parallel) VS query mean time
List pgsql-hackers

On Thu, Feb 7, 2019 at 9:31 PM Haribabu Kommi <kommi.haribabu@gmail.com> wrote:
Hi Hackers,

Does an increase in transaction commits per second mean good query performance?
I ask because many monitoring tools (including pgAdmin) display the number of
transactions per second on their dashboards.
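
For reference, such a dashboard TPS figure is typically derived by sampling pg_stat_database; a minimal sketch (the 10-second interval and the datname are only illustrative):

-- sample the commit/rollback counters, wait, and sample again;
-- TPS is the delta divided by the elapsed seconds
SELECT xact_commit + xact_rollback AS total_xacts
  FROM pg_stat_database WHERE datname = 'postgres';
SELECT pg_sleep(10);
SELECT xact_commit + xact_rollback AS total_xacts
  FROM pg_stat_database WHERE datname = 'postgres';
-- TPS = (second total_xacts - first total_xacts) / 10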

While testing a bunch of queries with different sets of configurations, I observed that
the TPS of one particular configuration increased compared to the default server configuration, but the overall query execution performance decreased when comparing the run times of all the queries.
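
To compare query performance rather than commit counts, one option (a sketch, assuming the pg_stat_statements extension is installed; the timing column is total_time on PostgreSQL 11 and earlier) is:

-- mean execution time per statement, slowest first
-- (on PostgreSQL 13+ the column is total_exec_time instead of total_time)
SELECT query,
       calls,
       total_time / calls AS mean_time_ms
  FROM pg_stat_statements
 ORDER BY mean_time_ms DESC
 LIMIT 10;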

This is because the xact_commit value is larger than with the default configuration. The changed server configuration leads to more parallel workers being launched, and every parallel worker's operation is counted as an extra commit. For this reason the total number of commits increased, even though the overall query performance decreased.
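
To illustrate, a rough way to see this effect is to run the same query with parallelism disabled and then enabled and compare the xact_commit deltas (only an example; pg_stat_database counters are updated by the statistics collector with a slight delay):

-- run with parallelism disabled, note the xact_commit delta
SET max_parallel_workers_per_gather = 0;
SELECT xact_commit FROM pg_stat_database WHERE datname = 'postgres';
SELECT count(*) FROM tbl;
SELECT xact_commit FROM pg_stat_database WHERE datname = 'postgres';

-- run again with two workers; the delta grows by roughly the number of
-- workers launched, even though only one client-visible commit happened
SET max_parallel_workers_per_gather = 2;
SELECT xact_commit FROM pg_stat_database WHERE datname = 'postgres';
SELECT count(*) FROM tbl;
SELECT xact_commit FROM pg_stat_database WHERE datname = 'postgres';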

Is there any relation between transaction commits and performance?
 
Is there any specific reason that parallel worker activity is also counted as a transaction commit? Notably, in my observation, if we did not count parallel worker activity as separate commits, the test would not show an increase in transaction commits.

The following statements show the increase in the xact_commit value with
parallel workers. I can understand the workers updating the seq_scan stats,
since they actually performed the seq scan. Does the same reasoning apply to
parallel worker transaction commits as well?

The transaction commit counter is also updated by internal operations such as
autovacuum and checkpoints, but the increase in the counter from those operations
is not nearly as visible as the increase from parallel workers.
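
As a rough illustration (the 60-second interval and datname are arbitrary), the background contribution can be observed on an otherwise idle database:

-- sample the counter twice on an idle database; any growth beyond the
-- sampling queries themselves (each SELECT commits one transaction)
-- comes from background activity such as autovacuum
SELECT xact_commit FROM pg_stat_database WHERE datname = 'postgres';
SELECT pg_sleep(60);
SELECT xact_commit FROM pg_stat_database WHERE datname = 'postgres';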

Is the spike in TPS caused by parallel workers acceptable?

postgres=# select relname, seq_scan from pg_stat_user_tables where relname = 'tbl';
 relname | seq_scan 
---------+----------
 tbl     |       16
(1 row)

postgres=# begin;
BEGIN
postgres=# select xact_commit from pg_stat_database where datname = 'postgres';
 xact_commit 
-------------
         524
(1 row)

postgres=# explain analyze select * from tbl where f1 = 1000;
                                                    QUERY PLAN                                                     
-------------------------------------------------------------------------------------------------------------------
 Gather  (cost=0.00..3645.83 rows=1 width=214) (actual time=1.703..79.736 rows=1 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Seq Scan on tbl  (cost=0.00..3645.83 rows=1 width=214) (actual time=28.180..51.672 rows=0 loops=3)
         Filter: (f1 = 1000)
         Rows Removed by Filter: 33333
 Planning Time: 0.090 ms
 Execution Time: 79.776 ms
(8 rows)

postgres=# commit;
COMMIT
postgres=# select xact_commit from pg_stat_database where datname = 'postgres';
 xact_commit 
-------------
         531
(1 row)

postgres=# select relname, seq_scan from pg_stat_user_tables where relname = 'tbl';
 relname | seq_scan 
---------+----------
 tbl     |       19
(1 row)

Regards,
Haribabu Kommi
Fujitsu Australia
