On Fri, Nov 1, 2019 at 3:42 AM, Zhenghua Lyu <zlv@pivotal.io> wrote:
>
> My issue: I did some spikes and tests on 1 TB of TPC-DS data. For query 104, the planner generates a
> nestloop join even with enable_nestloop set to off, and the final plan's total cost is huge (about 1e24). But if I
> enlarge disable_cost to 1e30, the planner generates a hash join instead.
>
> So I guess that disable_cost is not large enough for huge amounts of data.
>
> It is tricky to set disable_cost to a huge number. Can we come up with a better solution?
>
Isn't this a case for turning disable_cost into a GUC? As Thomas suggested, an
upper limit of DBL_MAX should be sufficient.
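For illustration, the GUC version would amount to a new entry in the ConfigureNamesReal table in guc.c, alongside the other cost settings. Just a sketch of the idea, not a tested patch; the description string and the choice of QUERY_TUNING_COST grouping are my assumptions:

```c
/* Sketch only: new entry for guc.c's ConfigureNamesReal[], assuming
 * disable_cost in costsize.c loses its "const" qualifier. */
{
    {"disable_cost", PGC_USERSET, QUERY_TUNING_COST,
        gettext_noop("Sets the cost penalty the planner adds to disabled plan types."),
        NULL
    },
    &disable_cost,
    1.0e10, 0.0, DBL_MAX,
    NULL, NULL, NULL
},
```

This keeps the current default of 1.0e10, so behavior is unchanged unless the user raises it.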
> The following thoughts are from Heikki:
>>
>> Aside from not having a large enough disable cost, there's also the fact that the high cost might affect the
>> rest of the plan, if we have to use a plan type that's disabled. For example, if a table doesn't have any indexes, but
>> enable_seqscan is off, we might put the unavoidable Seq Scan on a different side of a join than we would with
>> enable_seqscan=on, because of the high cost estimate.
>
>
>>
>> I think a more robust way to disable forbidden plan types would be to handle the disabling in add_path(). Instead of
>> having a high disable cost on the Path itself, the comparison in add_path() would always consider disabled paths more
>> expensive than others, regardless of the cost.
>
I'm afraid that is not as cheap as using disable_cost as a node cost. Are
you proposing to add a new boolean field to the Path struct and handle
those cases in compare_path_costs_fuzzily?
--
Euler Taveira
Timbira - http://www.timbira.com.br/
PostgreSQL: Consulting, Development, 24x7 Support and Training