Re: Slow planning time for simple query - Mailing list pgsql-general

From Maksim Milyutin
Subject Re: Slow planning time for simple query
Date
Msg-id 7a5653e4-5a94-18d3-17e4-7e11ed2c9919@gmail.com
In response to Re: Slow planning time for simple query  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Slow planning time for simple query  (Maksim Milyutin <milyutinma@gmail.com>)
Re: Slow planning time for simple query  (Maksim Milyutin <milyutinma@gmail.com>)
List pgsql-general

On 09.06.2018 22:49, Tom Lane wrote:

> Maksim Milyutin <milyutinma@gmail.com> writes:
>> On hot standby I faced a similar problem.
>> ...
>> is planned in 4.940 ms on master and *254.741* ms on standby.
>
> (I wonder though why, if you executed the same query on the master,
> its setting of the index-entry-is-dead bits didn't propagate to the
> standby.)

I have checked the number of dead item pointers (through the pageinspect extension) in the first leaf page of the index used by the query ('main.message_instance_pkey') on the master and the standby, and noticed a big difference.
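
For reference, the inspection functions come from the pageinspect extension; it can be installed into the schema used below with something along these lines (assuming superuser access; the "monitoring" schema is just how it is set up here, not a stock location):

-- install pageinspect into the schema that the query below references
CREATE EXTENSION IF NOT EXISTS pageinspect WITH SCHEMA monitoring;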

SELECT * FROM monitoring.bt_page_stats('main.message_instance_pkey', 3705);

On master:

 blkno | type | live_items | dead_items | avg_item_size | page_size | free_size | btpo_prev | btpo_next | btpo | btpo_flags
-------+------+------------+------------+---------------+-----------+-----------+-----------+-----------+------+------------
  3705 | l    |          1 |         58 |            24 |      8192 |      6496 |         0 |      3719 |    0 |         65

On standby:

 blkno | type | live_items | dead_items | avg_item_size | page_size | free_size | btpo_prev | btpo_next | btpo | btpo_flags
-------+------+------------+------------+---------------+-----------+-----------+-----------+-----------+------+------------
  3705 | l    |         59 |          0 |            24 |      8192 |      6496 |         0 |      3719 |    0 |          1
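
If I read the flag bits in nbtree.h correctly (BTP_LEAF = 1, BTP_HAS_GARBAGE = 64), btpo_flags = 65 on the master means the leaf page is additionally marked as containing killed tuples, while 1 on the standby is a plain leaf page. A trivial bit test confirms the decoding:

-- decode btpo_flags = 65: a leaf page that also carries the "has garbage" hint
SELECT (65 & 1) <> 0 AS btp_leaf, (65 & 64) <> 0 AS btp_has_garbage;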


Running vacuum improves the situation.
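
A minimal sketch of the kind of vacuum that clears the dead entries (the table name main.message_instance is my assumption, inferred from the index name; the cleanup is WAL-logged, so it reaches the standby):

-- run on the master; the index cleanup is replicated through WAL
VACUUM VERBOSE main.message_instance;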
Could there be something I have configured incorrectly in WAL logging or replication?


> I wonder if we should extend the "SnapshotNonVacuumable" logic introduced
> in commit 3ca930fc3 so that in hot standby, *all* index entries are deemed
> non vacuumable.  This would essentially get rid of long standby planning
> times in this sort of scenario by instead accepting worse (possibly much
> worse) planner range estimates.  I'm unsure if that's a good tradeoff or
> not.

I applied the patch introduced in this commit on the test standby (not on the master; I don't know whether that is correct) and haven't noticed any difference.
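
For anyone reproducing this, the planning time on the two nodes can be compared with something along these lines (the real query is not shown in this message, so the table and predicate below are only placeholders):

-- EXPLAIN with the SUMMARY option prints "Planning time: ... ms" without executing the query
EXPLAIN (SUMMARY) SELECT * FROM main.message_instance WHERE /* real predicate here */ true;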

-- 
Regards,
Maksim Milyutin
