Re: merge>hash>loop - Mailing list pgsql-performance

From: Tom Lane
Subject: Re: merge>hash>loop
Msg-id: 20064.1145403513@sss.pgh.pa.us
In response to: Re: merge>hash>loop (Markus Schaber <schabi@logix-tt.com>)
List: pgsql-performance

Markus Schaber <schabi@logix-tt.com> writes:
> An easy first approach would be to add a user-tunable cache probability
> value to each index (and possibly table) between 0 and 1.  Then simply
> multiply random_page_cost by (1 - that value) for each scan.

That's not the way you'd need to use it.  But on reflection I do think
there's some merit in a "cache probability" parameter, ranging from zero
(giving current planner behavior) to one (causing the planner to assume
everything is already in cache from prior queries).  We'd have to look
at exactly how such an assumption should affect the cost equations ...

            regards, tom lane
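
A minimal sketch of how such a cache-probability parameter might enter the
per-page fetch cost, assuming a simple linear interpolation (the function
name, the 0.01 constant, and the interpolation itself are illustrative
assumptions, not anything in the actual planner):

    /*
     * Hypothetical sketch only: fold a per-relation "cache probability"
     * into the per-page fetch cost.  cache_prob = 0.0 reproduces current
     * planner behavior (charge the full random_page_cost); cache_prob = 1.0
     * assumes the page is already in cache and charges only a nominal
     * in-memory cost.  The names and the 0.01 constant are invented for
     * this sketch and do not exist in PostgreSQL.
     */
    static double
    effective_page_cost(double random_page_cost, double cache_prob)
    {
        const double cached_page_cost = 0.01;   /* assumed near-free fetch */

        return cache_prob * cached_page_cost
             + (1.0 - cache_prob) * random_page_cost;
    }

In this framing, multiplying random_page_cost by (1 - value), as proposed
upthread, is simply the limiting case where a cached fetch is treated as
costing nothing.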
