On Mon, 29 Sep 2003, Tom Lane wrote:
> Stephan Szabo <sszabo@megazone.bigpanda.com> writes:
> >> Hm. Don't suppose you were using EXPLAIN ANALYZE so we could see what's
> >> happening? This is clearly a planner failure, although I'm unsure if we
> >> can expect the planner to get the right answer with no pg_statistic entries.
>
> > The left join one seems to give me values like the following:
>
> There are some fishy row estimates in here:
>
> > -> Index Scan using pktest_a_key on pktest (cost=0.00..52.00
> > rows=1000 width=8) (actual time=17.82..1609.97 rows=10000 loops=1)
>
> The system definitely should be expected to have the accurate row count
> for the PK table, since an index should have been created on it (and we
> do do that after loading the data, no?). It is possible that it'd have
> the default 1000 estimate for the FK table, if there are no indexes at
> all on the FK table; otherwise it should have the right number. It's
> not real clear to me what conditions you're testing under, but the
> estimates in the plans you're quoting aren't consistent ...
Well, they're all from the same load of the same data, with only a stop
and restart of the server in between, but I did create the index on the pk
table first, then loaded the data, and then built the fk table index
(because I'd wanted to try without that index as well), which meant that
it wouldn't match the behavior of a dump. Ugh, and I'd forgotten that the
primary key didn't get created until later too.
Okay, that's much better:

 Hash Left Join  (cost=203.00..1487869.29 rows=49501250 width=4) (actual time=611632.67..611632.67 rows=0 loops=1)
   Hash Cond: (("outer".b = "inner".a) AND ("outer".c = "inner".b))
   Filter: ("inner".a IS NULL)
   ->  Seq Scan on fktest  (cost=0.00..745099.00 rows=49501250 width=8) (actual time=0.01..169642.48 rows=50000000 loops=1)
         Filter: ((b IS NOT NULL) AND (c IS NOT NULL))
   ->  Hash  (cost=152.00..152.00 rows=10000 width=8) (actual time=46.04..46.04 rows=0 loops=1)
         ->  Seq Scan on pktest  (cost=0.00..152.00 rows=10000 width=8) (actual time=0.02..21.38 rows=10000 loops=1)
 Total runtime: 611632.95 msec
(8 rows)
That's much better. :) As long as the row estimates are reasonable it
seems to be okay, but I do wonder why it chose the merge join for the
case where it thought there were only 1000 rows.
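For reference, the plan above corresponds to a query of roughly the
following shape (the table and column names are taken from the plan; the
exact statement I ran may have differed slightly) -- the usual left-join
anti-join pattern for finding fk rows with no matching pk row:

```sql
-- Hypothetical reconstruction of the query behind the plan above:
-- count fktest rows whose non-null (b, c) pair has no match in pktest (a, b).
SELECT fktest.a
FROM fktest
LEFT JOIN pktest
  ON fktest.b = pktest.a
 AND fktest.c = pktest.b
WHERE fktest.b IS NOT NULL
  AND fktest.c IS NOT NULL
  AND pktest.a IS NULL;
```

The IS NOT NULL filters show up on the fktest seq scan in the plan, and
the final `pktest.a IS NULL` becomes the post-join filter that keeps only
unmatched rows.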