Re: Nested loop vs merge join: inconsistencies between estimated and actual time - Mailing list pgsql-performance

From Vlad Arkhipov
Subject Re: Nested loop vs merge join: inconsistencies between estimated and actual time
Date
Msg-id 47D5E5CC.7020403@dc.baikal.ru
In response to Re: Nested loop vs merge join: inconsistencies between estimated and actual time  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-performance
Tom Lane writes:
Vlad Arkhipov <arhipov@dc.baikal.ru> writes: 
I came across this issue while writing a report-like query for two not
very large tables. I've tried several methods to resolve it (see
below), but now I'm really stuck...
It looks like you are wishing to optimize for all-in-memory situations,
in which case the traditional advice is to reduce random_page_cost to
something close to 1.  AFAICS all the rowcount estimates you're seeing
are spot on, or as close to spot on as you could realistically hope for,
and so the problem lies with the cost parameters.  Fooling with the
statistics is not going to help if the rowcount estimates are already
good. 
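
For concreteness, that tuning amounts to something like the following per-session experiment (the values below are illustrative assumptions, not settings taken from this thread):

    -- Illustrative sketch: cost random reads almost the same as sequential
    -- reads, which is what an all-in-memory workload implies.
    SET random_page_cost = 1.0;        -- default is 4.0; seq_page_cost defaults to 1.0
    SET effective_cache_size = '1GB';  -- hypothetical value; roughly the size of the cache available to the database
    EXPLAIN ANALYZE <the report query>;  -- placeholder for the query from the original post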

I tried changing random_page_cost to 1.1, or something close to it, and increasing/decreasing effective_cache_size, but Postgres always prefers the plan with the merge join.
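
A common follow-up diagnostic, not shown in this thread, is to disable the merge join for a single session and compare the estimated costs and the actual times that EXPLAIN ANALYZE reports for the two plans:

    -- Diagnostic only, not a production setting: steer the planner away from
    -- the merge join so the nested-loop plan can be timed directly.
    SET enable_mergejoin = off;
    EXPLAIN ANALYZE <the report query>;  -- placeholder for the query from the original post
    RESET enable_mergejoin;

If the nested loop comes out clearly faster in actual time while the planner still estimates it as more expensive, that is further evidence that the cost parameters, rather than the row estimates, are what need adjusting.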
