On Wednesday, 9 May 2007 19:40, Tom Lane wrote:
> I remember having dithered about whether
> to try to avoid counting the same physical relation more than once in
> total_table_pages, but this example certainly suggests that we
> shouldn't. Meanwhile, do the estimates get better if you set
> effective_cache_size to 1GB or so?
Yes, that makes the plan significantly cheaper: the estimated cost drops to
something like 500,000 instead of 5,000,000, but that is still a lot more
expensive than the hash join, which comes in at about 100,000.
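
(For anyone wanting to reproduce this kind of comparison, the session looks
roughly like the following; the table names and query are placeholders, not
the actual test case, and effective_cache_size only changes the planner's
assumptions, it does not allocate any memory:)

    SET effective_cache_size = '1GB';  -- planner's assumption about OS + shared buffer cache
    EXPLAIN
    SELECT count(*)                    -- placeholder query; substitute the real join here
    FROM   big_table b
    JOIN   other_table o ON o.id = b.id;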
> To return to your original comment: if you're trying to model a
> situation with a fully cached database, I think it's sensible
> to set random_page_cost = seq_page_cost = 0.1 or so. You had
> mentioned having to decrease them to 0.02, which seems unreasonably
> small to me too, but maybe with the larger effective_cache_size
> you won't have to go that far.
Heh, when I decrease these parameters, the hash join's estimate gets cheaper as
well, so I can't actually get the planner to pick the nested-loop join.
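
(The way to put the two estimates side by side is to disable the hash join for
one EXPLAIN run; again the query and table names below are only placeholders
for the real test case, and the cost settings are the ones suggested above:)

    SET random_page_cost = 0.1;
    SET seq_page_cost = 0.1;

    EXPLAIN SELECT count(*) FROM big_table b
            JOIN other_table o ON o.id = b.id;   -- hash join estimate

    SET enable_hashjoin = off;                   -- steer the planner to the nested loop
    EXPLAIN SELECT count(*) FROM big_table b
            JOIN other_table o ON o.id = b.id;   -- nested-loop estimate
    RESET enable_hashjoin;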
--
Peter Eisentraut
http://developer.postgresql.org/~petere/