Commit(?) overhead - Mailing list pgsql-performance

From: Duncan Kinnear
Subject: Commit(?) overhead
Msg-id: 1958188625.1607913.1554328761689.JavaMail.zimbra@mccarthy.co.nz
List: pgsql-performance
We have a very simple table, whose DDL is as follows: 

    CREATE TABLE public.next_id (
       id varchar(255) NOT NULL,
       next_value int8 NOT NULL,
       CONSTRAINT next_id_pk PRIMARY KEY (id)
    ); 

The table only has about 125 rows, and there are no indexes apart from the primary key constraint.

In DBeaver I am executing the following UPDATE query:

    UPDATE next_id SET next_value=next_value+1 WHERE id='Session';

If I point DBeaver to a server (localhost) running version:
   11.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20181127, 64-bit
it executes on average in about 50ms.

The EXPLAIN (ANALYSE, TIMING TRUE) of this query gives:

 Update on next_id  (cost=0.14..8.16 rows=1 width=36) (actual time=0.057..0.057 rows=0 loops=1)
   ->  Index Scan using next_id_pk on next_id  (cost=0.14..8.16 rows=1 width=36) (actual time=0.039..0.040 rows=1 loops=1)
         Index Cond: ((id)::text = 'Session'::text)
 Planning Time: 0.083 ms
 Execution Time: 0.089 ms

which is significantly less than 50ms.
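One way to see where the remaining ~50ms goes is to reproduce the timing outside DBeaver. With psql's \timing, the client-observed elapsed time includes parse, execute, commit (including any fsync wait), and network round trip, whereas EXPLAIN ANALYSE reports only planning and execution time. A minimal sketch, assuming psql access to the same server:

```sql
-- \timing reports the full client-observed elapsed time per statement,
-- including commit and round trip; EXPLAIN ANALYSE does not.
\timing on
UPDATE next_id SET next_value = next_value + 1 WHERE id = 'Session';
-- Compare the reported "Time:" value with DBeaver's ~50ms; if they match,
-- the overhead is not DBeaver-specific.
```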

Now, if I point DBeaver to a VM server on the same gigabit network switch, running version: 
   9.5.3 on i386-pc-solaris2.11, compiled by cc: Sun C 5.10 SunOS_i386 Patch 142363-07 2010/12/09, 64-bit
then the same query executes in about 2-3ms.

The EXPLAIN output when executing the query on this server is:

 Update on next_id  (cost=0.27..8.29 rows=1 width=36) (actual time=0.062..0.062 rows=0 loops=1)
   ->  Index Scan using next_id_pkey on next_id  (cost=0.27..8.29 rows=1 width=36) (actual time=0.025..0.026 rows=1 loops=1)
         Index Cond: ((id)::text = 'Session'::text)
 Planning time: 0.083 ms
 Execution time: 0.096 ms

which you will see is virtually identical to the plan from the slower (localhost) server.

Why is the query taking so much longer on the localhost server?

Note that the localhost machine is significantly faster in other metrics (CPU, file system, etc.).

I have also tried the query on another server on the same network switch running version: 
   10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.0, 64-bit
and the timings are very similar to those for 'localhost'. That is, approx 50ms on average.

Now, if I run the following FOR LOOP query:

    do $$
    begin
        for i in 1..10000 loop
            update next_id set next_value = next_value + 1 where id = 'Session';
        end loop;
    end;
    $$;

Then this completes in about the same time on ALL of the servers - approximately 1.7s - which makes sense, as 10,000 times the above plan/execute times is approx 1.7s.
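Worth noting: the DO block runs all 10,000 updates inside a single transaction, so only one commit is paid at the end. To isolate the per-commit cost, a hypothetical variant (requires version 11, which allows transaction control inside procedures) could commit after every update; if each commit really costs ~50ms, even 100 iterations should take around 5 seconds:

```sql
-- Hypothetical diagnostic: same loop, but COMMIT after every UPDATE.
-- Requires PostgreSQL 11+ (transaction control in procedures).
CREATE PROCEDURE bump_session_n(n int)
LANGUAGE plpgsql
AS $$
begin
    for i in 1..n loop
        update next_id set next_value = next_value + 1 where id = 'Session';
        commit;  -- each iteration now pays the full commit cost
    end loop;
end;
$$;

CALL bump_session_n(100);
```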
 

So, to me this feels like some kind of COMMIT overhead of approx 50ms that the version 10 and version 11 servers are experiencing. But I have no idea where to look to try and find where this time is being spent.
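If the cost really is in commit, the usual first suspect is waiting for the WAL fsync at commit time. One diagnostic worth trying (synchronous_commit is settable per session, no superuser needed): disable it and rerun the UPDATE; if the time drops from ~50ms to sub-millisecond, the time is being spent waiting for WAL to reach disk.

```sql
-- Diagnostic sketch: if commit latency is fsync-bound, disabling
-- synchronous_commit for this session should make the UPDATE fast.
-- (Session-local; on a crash only the last few transactions could be
-- lost, the database itself stays consistent.)
SET synchronous_commit = off;
UPDATE next_id SET next_value = next_value + 1 WHERE id = 'Session';
RESET synchronous_commit;
```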
 

Note that the schemas of the databases on the 3 servers involved are virtually identical. The schema for this table is exactly the same.
 

Hoping that someone can give me an idea about where to go looking.


Regards, 

Duncan Kinnear 
    
Floor 1, 100 McLeod St, Hastings 4120, New Zealand 
PO Box 2006, Hastings 4153, New Zealand 
P: +64 6 871 5700 F: +64 6 871 5709 E: duncan.kinnear@mccarthy.co.nz


