Thanks, Tom. You mean this bit, right?
-> Seq Scan on _test_pos  (cost=0.00..10728.00 rows=1 width=4)
   Filter:
   ((('0101000020E61000000000000000001C400000000000001C40'::geography &&
   _st_expand(pos, 300000::double precision)) AND ...
I tried to find some information on selectivity estimation functions, but
only came up with
http://www.postgresql.org/docs/9.1/static/xoper-optimization.html, which
talks about operators. Is there something similar for functions? Or does
the rows estimate come from the PostGIS && operator that's used
internally by ST_DWithin? I'm just trying to understand this better so I
know what to ask on the PostGIS list.
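
(For reference, the estimators attached to an operator can be looked up in
the system catalog. This is a generic catalog query, not specific to my
setup:)

```sql
-- oprrest is the restriction-selectivity estimator and oprjoin the
-- join-selectivity estimator; for the PostGIS geography && operator
-- these should point at PostGIS-provided functions.
SELECT oprname, oprleft::regtype, oprright::regtype, oprrest, oprjoin
FROM pg_operator
WHERE oprname = '&&';
```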
Thanks,
Evan
On 17/05/2012 12:31 AM, Tom Lane wrote:
> Evan Martin <postgresql@realityexists.net> writes:
>> I've run into a weird query performance problem. I have a large, complex
>> query which joins the results of several set-returning functions with
>> some tables and filters them by calling another function, which involves
>> PostGIS calls (ST_DWithin). This used to run in about 10 seconds until I
>> changed the functions to allow them to be inlined. (They previously had
>> "SET search_path FROM current", which prevented inlining.) Now the query
>> doesn't return in 10 minutes!
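
(To illustrate what the change looked like -- a sketch with made-up names
and distances; the real functions are more complex:)

```sql
-- With a SET clause the planner cannot inline the SQL function body,
-- so the function stays opaque and gets a default selectivity estimate:
CREATE FUNCTION is_nearby(a geography, b geography) RETURNS boolean AS
$$ SELECT ST_DWithin(a, b, 300000) $$
LANGUAGE sql STABLE
SET search_path FROM CURRENT;

-- Without the SET clause the body can be inlined into the calling query,
-- exposing ST_DWithin (and its internal && test) to the planner:
CREATE OR REPLACE FUNCTION is_nearby(a geography, b geography) RETURNS boolean AS
$$ SELECT ST_DWithin(a, b, 300000) $$
LANGUAGE sql STABLE;
```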
> You didn't show EXPLAIN ANALYZE results, but I see that one query is
> estimating that 6667 rows from _test_pos pass the filter, while the
> other thinks only 1 row passes; that changes the planner's ideas about
> how to do the join, and evidently not for the better. In the case of
> the opaque user-defined function, you're just getting a default
> selectivity estimate, and it's really just blind luck if that is close
> to reality. But in the other case it should be invoking
> PostGIS-provided selectivity estimation functions, and apparently those
> are giving poor results. I think you'd be best off to ask about that
> on the PostGIS mailing lists.
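
(Context for list readers: those estimators are attached when an operator
is created, roughly like this. The function names below are placeholders
modeled on the PostGIS definitions, not their exact source:)

```sql
-- Hypothetical sketch of how an operator declares its selectivity
-- estimators; PostGIS's actual && definition names its own C functions.
CREATE OPERATOR && (
    LEFTARG   = geography,
    RIGHTARG  = geography,
    PROCEDURE = geography_overlaps,
    RESTRICT  = geography_selectivity,       -- restriction estimate
    JOIN      = geography_join_selectivity   -- join estimate
);
```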
>
> regards, tom lane
>