> The main thing that jumps out from the profiles is that on these complex
> searches, the optimizer spends practically *all* of its time down inside
> better_path, scanning the list of already known access paths to see if a
> proposed new path is a duplicate of one already known. (Most of the time
> it is not, as shown by the small number of times path_is_cheaper gets
> called from better_path.) This scanning is O(N^2) in the number of paths
> considered, whereas it seems that all the other work is no worse than O(N)
> in the number of paths. The bottom of the duplicate check is equal(),
> which as you can see is getting called a horrendous number of times.
The ironic thing about add_pathlist() is that the slowness is due to the
code using a nested loop to merge duplicate RelOptInfo paths, rather
than some sort of mergejoin.
I will keep studying it, but from your comments I can see you understand
the code much better than I do. I am only now coming to grips with your
conclusions.
--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026