Re: On disable_cost - Mailing list pgsql-hackers

From David Rowley
Subject Re: On disable_cost
Msg-id CAApHDvoqYmnWJJJYnhqUias61d4A+-1_4r6NHY_5MisxFzebcA@mail.gmail.com
In response to Re: On disable_cost  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: On disable_cost  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: On disable_cost  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On Wed, 13 Mar 2024 at 08:55, Robert Haas <robertmhaas@gmail.com> wrote:
> But in the absence of that, we need some way to privilege the
> non-disabled paths over the disabled ones -- and I'd prefer to have
> something more principled than disable_cost, if we can work out the
> details.

The primary issue I see with disable_cost is caused by
STD_FUZZ_FACTOR.  When you add 1.0e10 to a couple of modestly costly
paths, it makes them appear fuzzily the same in cases where one could
be significantly cheaper than the other. If we were to bump up
disable_cost, it would make this problem worse.
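
To illustrate with a standalone sketch (not the real
compare_path_costs_fuzzily() code, and the path costs are made up;
1.01 is STD_FUZZ_FACTOR and 1.0e10 is the current disable_cost):

#include <stdio.h>
#include <stdbool.h>

#define STD_FUZZ_FACTOR 1.01
#define DISABLE_COST    1.0e10

/* fuzzily the same if neither cost exceeds the other by more than 1% */
static bool
fuzzily_same(double cost1, double cost2)
{
    return !(cost1 > cost2 * STD_FUZZ_FACTOR ||
             cost2 > cost1 * STD_FUZZ_FACTOR);
}

int
main(void)
{
    printf("%d\n", fuzzily_same(2000.0, 9000.0));        /* 0: clearly different */
    printf("%d\n", fuzzily_same(2000.0 + DISABLE_COST,
                                9000.0 + DISABLE_COST)); /* 1: fuzzily the same */
    return 0;
}

Once disable_cost is added, the 4.5x difference between the two paths
is within 1% and gets fuzzed away.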

I think we do still need some way to pick the cheapest disabled path
when there are no other options.

One way would be to have compare_path_costs_fuzzily() use a
fuzz_factor of 1.0 whenever either path's cost is >= disable_cost.
clamp_row_est() does cap row estimates at MAXIMUM_ROWCOUNT (1e100),
so I think there is some value of disable_cost high enough that we
could be almost certain a path only reaches it because it's disabled
rather than because it's truly expensive.
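
Something like this, as a standalone sketch of the rule rather than a
patch (the real compare_path_costs_fuzzily() also compares startup
cost; I've kept disable_cost at its current 1.0e10 here purely for
illustration):

#include <stdio.h>

#define STD_FUZZ_FACTOR 1.01
#define DISABLE_COST    1.0e10   /* stand-in for whatever value we pick */

/* -1 if cost1 is cheaper, +1 if cost2 is cheaper, 0 if fuzzily the same */
static int
compare_total_cost(double cost1, double cost2)
{
    double  fuzz = STD_FUZZ_FACTOR;

    /* proposed rule: compare disabled paths exactly, not fuzzily */
    if (cost1 >= DISABLE_COST || cost2 >= DISABLE_COST)
        fuzz = 1.0;

    if (cost1 > cost2 * fuzz)
        return 1;
    if (cost2 > cost1 * fuzz)
        return -1;
    return 0;
}

int
main(void)
{
    /* two disabled paths: the cheaper one now wins instead of tying */
    printf("%d\n", compare_total_cost(2000.0 + DISABLE_COST,
                                      9000.0 + DISABLE_COST)); /* -1 */
    return 0;
}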

So maybe the fix could be to set disable_cost to something like
1.0e110 and adjust compare_path_costs_fuzzily() to not apply the
fuzz_factor for paths >= disable_cost.  However, I wonder if that
risks the costs going infinite (i.e. overflowing DBL_MAX) after a
couple of Cartesian joins.

David
