Re: Nested loops overpriced - Mailing list pgsql-performance

From Peter Eisentraut
Subject Re: Nested loops overpriced
Date
Msg-id 200705101730.22883.peter_e@gmx.net
In response to Re: Nested loops overpriced  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-performance
On Wednesday, 9 May 2007 19:40, Tom Lane wrote:
> I remember having dithered about whether
> to try to avoid counting the same physical relation more than once in
> total_table_pages, but this example certainly suggests that we
> shouldn't.  Meanwhile, do the estimates get better if you set
> effective_cache_size to 1GB or so?

Yes, that makes the nested-loop plan significantly cheaper (an estimated cost
of roughly 500,000 instead of 5,000,000), but it is still much more expensive
than the hash join (about 100,000).
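
For reference, a minimal psql sketch of that experiment; the table and join
are placeholders, since the actual query isn't shown in this thread:

    SET effective_cache_size = '1GB';  -- or 131072, in 8kB pages, if the server doesn't accept units
    EXPLAIN
      SELECT *
      FROM some_table t
      JOIN other_table o ON o.id = t.other_id;  -- re-check the nested-loop cost estimate
    RESET effective_cache_size;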

> To return to your original comment: if you're trying to model a
> situation with a fully cached database, I think it's sensible
> to set random_page_cost = seq_page_cost = 0.1 or so.  You had
> mentioned having to decrease them to 0.02, which seems unreasonably
> small to me too, but maybe with the larger effective_cache_size
> you won't have to go that far.

Heh, when I decrease these parameters, the hash join gets cheaper as well, so
I still can't get the planner to pick the nested-loop join.
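
A rough sketch of that kind of experiment, again with placeholder table names
rather than the real query: lower both page costs for the session, then toggle
enable_hashjoin to compare the two plans' estimates side by side.

    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;
    EXPLAIN
      SELECT *
      FROM some_table t
      JOIN other_table o ON o.id = t.other_id;  -- planner still prefers the hash join
    SET enable_hashjoin = off;                  -- force it to show the nested-loop estimate
    EXPLAIN
      SELECT *
      FROM some_table t
      JOIN other_table o ON o.id = t.other_id;
    RESET ALL;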

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
