Re: Commit(?) overhead - Mailing list pgsql-performance

From: Jeff Janes
Subject: Re: Commit(?) overhead
Date:
Msg-id: CAMkU=1ysZti=P=v9C62VOg1kXvVdPDxWUCsrE6qjhhbDo_D74g@mail.gmail.com
In response to: Commit(?) overhead (Duncan Kinnear <duncan.kinnear@mccarthy.co.nz>)
Responses: Re: Commit(?) overhead
List: pgsql-performance
On Thu, Apr 4, 2019 at 3:42 AM Duncan Kinnear <duncan.kinnear@mccarthy.co.nz> wrote:

the EXPLAIN (ANALYSE, TIMING TRUE) of this query gives:

 Update on next_id  (cost=0.14..8.16 rows=1 width=36) (actual time=0.057..0.057 rows=0 loops=1)
   ->  Index Scan using next_id_pk on next_id  (cost=0.14..8.16 rows=1 width=36) (actual time=0.039..0.040 rows=1 loops=1)
         Index Cond: ((id)::text = 'Session'::text)
 Planning Time: 0.083 ms
 Execution Time: 0.089 ms

which is significantly less than 50ms.

EXPLAIN ANALYZE doesn't include the time needed to fsync the transaction logs.  It measures only the update itself, not the implicit commit at the end.  DBeaver is seeing the fsync-inclusive time.  50ms is quite long, but some file systems and OSes are fairly inefficient at this and can take several disk revolutions to get the data down.
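
You can see the gap directly from psql with \timing, which prints client-side elapsed time after each statement.  A minimal sketch, assuming a column named next_value (the actual UPDATE statement isn't shown in the thread); the timings in the comments are illustrative, taken from the numbers in this thread:

 \timing on
 BEGIN;
 UPDATE next_id SET next_value = next_value + 1 WHERE id = 'Session';
 -- psql prints something like: Time: 0.1 ms   (the statement alone,
 -- comparable to the EXPLAIN ANALYZE figures above)
 COMMIT;
 -- psql prints something like: Time: 50 ms    (includes the WAL fsync)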

Now, if I point DBeaver to a VM server on the same gigabit network switch, running version:
   9.5.3 on i386-pc-solaris2.11, compiled by cc: Sun C 5.10 SunOS_i386 Patch 142363-07 2010/12/09, 64-bit
then the same query executes in about 2-3 ms.

That machine probably has hardware to do a fast fsync, has fsync turned off, or is lying about the safety of its data.
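
If you have access to that box, the second and third possibilities are easy to check from any SQL session; a quick sketch using standard server settings:

 -- 'off' for either of these means commits return without a durable flush
 SHOW fsync;
 SHOW synchronous_commit;
 -- the flush method used for the WAL also matters
 SHOW wal_sync_method;

The pg_test_fsync utility that ships with PostgreSQL can also measure raw fsync rates on the WAL directory, which helps distinguish genuinely fast hardware (e.g. a battery-backed write cache) from a file system that only pretends to flush.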

Cheers,

Jeff
