Re: Seeing high query planning time on Azure Postgres Single Server version 11. - Mailing list pgsql-general

From: Tom Lane
Subject: Re: Seeing high query planning time on Azure Postgres Single Server version 11.
Msg-id: 1068645.1710007250@sss.pgh.pa.us
In response to: Re: Seeing high query planning time on Azure Postgres Single Server version 11.  (hassan rafi <haassaan.khann@gmail.com>)
List: pgsql-general
hassan rafi <haassaan.khann@gmail.com> writes:
> The issue of high query planning time seems to intermittently resolve
> itself, only to reoccur after a few hours.

I wonder if you are running into the lack of this fix:

Author: Tom Lane <tgl@sss.pgh.pa.us>
Branch: master Release: REL_16_BR [9c6ad5eaa] 2022-11-22 14:40:20 -0500
Branch: REL_15_STABLE Release: REL_15_2 [2debceed2] 2022-11-22 14:40:44 -0500
Branch: REL_14_STABLE Release: REL_14_7 [bd06fe4de] 2022-11-22 14:40:45 -0500
Branch: REL_13_STABLE Release: REL_13_10 [6e639267a] 2022-11-22 14:40:45 -0500
Branch: REL_12_STABLE Release: REL_12_14 [ec10b6139] 2022-11-22 14:40:45 -0500
Branch: REL_11_STABLE Release: REL_11_19 [b96a096db] 2022-11-22 14:40:46 -0500

    YA attempt at taming worst-case behavior of get_actual_variable_range.

    We've made multiple attempts at preventing get_actual_variable_range
    from taking an unreasonable amount of time (3ca930fc3, fccebe421).
    But there's still an issue for the very first planning attempt after
    deletion of a large number of extremal-valued tuples.  While that
    planning attempt will set "killed" bits on the tuples it visits and
    thereby reduce effort for next time, there's still a lot of work it
    has to do to visit the heap and then set those bits.  It's (usually?)
    not worth it to do that much work at plan time to have a slightly
    better estimate, especially in a context like this where the table
    contents are known to be mutating rapidly.

    Therefore, let's bound the amount of work to be done by giving up
    after we've visited 100 heap pages.  Giving up just means we'll
    fall back on the extremal value recorded in pg_statistic, so it
    shouldn't mean that planner estimates suddenly become worthless.

    Note that this means we'll still gradually whittle down the problem
    by setting a few more index "killed" bits in each planning attempt;
    so eventually we'll reach a good state (barring further deletions),
    even in the absence of VACUUM.

    Simon Riggs, per a complaint from Jakub Wartak (with cosmetic
    adjustments by me).  Back-patch to all supported branches.

    Discussion: https://postgr.es/m/CAKZiRmznOwi0oaV=4PHOCM4ygcH4MgSvt8=5cu_vNCfc8FSUug@mail.gmail.com
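[Editorial note: a rough sketch of the access pattern the commit message describes, for anyone wanting to reproduce the symptom locally. The table, column, and row counts below are invented for illustration and are not taken from the original report.]

CREATE TABLE orders (id bigserial PRIMARY KEY, created_at timestamptz NOT NULL);
CREATE INDEX ON orders (created_at);

-- Load roughly 12 days of rows, one per second, and collect statistics.
INSERT INTO orders (created_at)
SELECT now() - i * interval '1 second'
FROM generate_series(1, 1000000) AS i;
ANALYZE orders;

-- Bulk-delete the oldest rows (the extremal values of created_at) without VACUUM.
DELETE FROM orders WHERE created_at < now() - interval '1 day';

-- The first planning attempt that needs the column's actual minimum must wade
-- through the dead index entries at the low end; EXPLAIN (SUMMARY) reports the
-- "Planning Time" that blows up on releases lacking the 100-heap-page bound.
EXPLAIN (SUMMARY) SELECT count(*) FROM orders
WHERE created_at < now() - interval '5 days';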

As noted, that did make it into the 11.x branch, but not till 11.19.
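[Editorial note: a quick sanity check of which minor release the server is actually running; on a managed service like Azure Single Server the minor version is controlled by the provider.]

SHOW server_version;   -- anything below 11.19 lacks this fix
-- or, for the full build string:
SELECT version();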

            regards, tom lane


