Re: How to investigate what postgres is spending time on - Mailing list pgsql-general

From Chris Travers
Subject Re: How to investigate what postgres is spending time on
Date
Msg-id CAKt_ZfvFodk2LTYPKYH1Rr+gv_zqhSGxGU1YY0mSp8c9Ap2F_w@mail.gmail.com
In response to How to investigate what postgres is spending time on  (Kim Rose Carlsen <krc@hiper.dk>)
List pgsql-general


On Sun, Sep 23, 2018 at 1:15 PM Kim Rose Carlsen <krc@hiper.dk> wrote:

Hi


I have some simple INSERT / UPDATE queries that take a long time the first time they are run in our test environment, but I'm not sure what postgres is doing or what I can do to help it. What the slow queries have in common is that the table contains many rows, on the order of about 20 million.


Query:

INSERT INTO communication.request_parameter (request_id, template_version_parameter_id, parameter_value)
     VALUES (1222, 1211, 122) RETURNING request_parameter_id

Row from pg_stat_statements:
-[ RECORD 1 ]-------+--------------------------------------------------------------
userid              | 16385
dbid                | 16389
queryid             | 2064198912
query               | INSERT INTO communication.request_parameter (request_id, tem
calls               | 98
total_time          | 646.393451
min_time            | 0.036666
max_time            | 638.712758
mean_time           | 6.59585154081633
stddev_time         | 64.1818799227704
rows                | 98
shared_blks_hit     | 2850
shared_blks_read    | 24
shared_blks_dirtied | 21
shared_blks_written | 0
local_blks_hit      | 0
local_blks_read     | 0
local_blks_dirtied  | 0
local_blks_written  | 0
temp_blks_read      | 0
temp_blks_written   | 0
blk_read_time       | 0
blk_write_time      | 0
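The row can be pulled out with a query along these lines (the LIKE pattern is just an illustrative way to match this statement; column names are those of the stock pg_stat_statements view in PostgreSQL 10/11):

    SELECT calls, total_time, mean_time, max_time,
           shared_blks_hit, shared_blks_read, shared_blks_dirtied
    FROM pg_stat_statements
    WHERE query LIKE 'INSERT INTO communication.request_parameter%'
    ORDER BY total_time DESC;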
Description of table:
# \d communication.request_parameter
                                                         Table "communication.request_parameter"
            Column             |       Type        | Collation | Nullable |                                    Default                                    
-------------------------------+-------------------+-----------+----------+-------------------------------------------------------------------------------
 request_parameter_id          | integer           |           | not null | nextval('communication.request_parameter_request_parameter_id_seq'::regclass)
 request_id                    | integer           |           | not null |
 template_version_parameter_id | integer           |           | not null |
 parameter_value               | character varying |           |          |
Indexes:
    "request_parameter_pkey" PRIMARY KEY, btree (request_parameter_id)
    "request_parameter_parameter_value_idx" btree (parameter_value)
    "request_parameter_request_id_idx" btree (request_id)
    "request_parameter_template_version_parameter_id_idx" btree (template_version_parameter_id)
Foreign-key constraints:
    "request_parameter_request_id_fkey" FOREIGN KEY (request_id) REFERENCES communication.request(request_id)
    "request_parameter_template_version_parameter_id_fkey" FOREIGN KEY (template_version_parameter_id) REFERENCES communication.template_version_parameter(template_version_parameter_id)

This only happens in testing, and on a cold-booted database. The test database is constructed with pg_dump and restore on a fresh postgres installation.

Sounds like warming up the cache, but still: in a test environment you may want to add auto_explain to your list of preloaded libraries and perhaps set it to dump EXPLAIN ANALYZE output whenever a query exceeds a duration threshold.  Note that while dumping the query plans has very little overhead, timing the query plan nodes does impact performance negatively.
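A minimal postgresql.conf sketch for that (the 100ms threshold is just an illustrative value, and changing shared_preload_libraries requires a server restart):

    shared_preload_libraries = 'auto_explain'
    auto_explain.log_min_duration = '100ms'  # log the plan of any statement slower than this
    auto_explain.log_analyze = on            # include actual times and row counts (this is the per-node timing overhead mentioned above)
    auto_explain.log_buffers = on            # include buffer hits vs. reads; requires log_analyze

With log_buffers on, the plan logged for the slow first run should show a noticeable shared read count, while the warm runs show mostly hits.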


Best Regards

Kim Carlsen



--
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor lock-in.
