Torsten Förtsch <torsten.foertsch@gmx.net> writes:
> What I'm asking is the following. Assume a node without any filter has a
> startup cost C1, a total cost C2, and produces N rows. Now a filter
> is applied which passes through M rows. Then the startup cost for the
> node *with* the filter applied should be different from C1, because a
> certain number of rows from the beginning is filtered out, right?

No. The model is that startup cost is what's expended before the scan can
start, and then the run cost (total_cost - startup_cost) is expended while
scanning. Applying a filter increases the run cost and also reduces the
number of rows returned, but that's got nothing to do with startup cost.
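
To put some entirely made-up numbers on that, here is a rough sketch of
the bookkeeping in Python; this is not the planner's actual costsize.c
arithmetic, just the shape of the model:

    # Illustrative only: all names and constants here are invented.
    def apply_filter(startup_cost, total_cost, rows,
                     selectivity, per_row_qual_cost):
        run_cost = total_cost - startup_cost
        # Evaluating the filter once per input row adds to the run cost ...
        run_cost += rows * per_row_qual_cost
        # ... and fewer rows come out of the node ...
        rows_out = rows * selectivity
        # ... but the work done before scanning starts is unchanged.
        return startup_cost, startup_cost + run_cost, rows_out

    # A node costed at (0.00, 100.00) returning 1000 rows, with a filter
    # passing 10% of them at 0.0025 cost units per row:
    print(apply_filter(0.0, 100.0, 1000, 0.10, 0.0025))
    # -> (0.0, 102.5, 100.0): startup unchanged, run cost up, rows down
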
As a comparison point, imagine an index scan that has a filter condition
in addition to the indexable condition (which let's assume selects
multiple rows). The startup cost for such a plan corresponds to the index
descent costs. The run cost corresponds to scanning the index entries
matching the indexable condition, fetching the heap rows, and applying the
filter condition.
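
A back-of-the-envelope version of that breakdown, again with invented
numbers rather than anything cost_index() really computes:

    descent_cost        = 0.29     # walking the index to the first match
    matching_entries    = 500      # rows passing the indexable condition
    per_entry_cost      = 0.005    # examining one index entry
    per_heap_fetch_cost = 0.01     # fetching the heap row it points to
    per_filter_cost     = 0.0025   # evaluating the extra filter condition

    startup_cost = descent_cost
    run_cost = matching_entries * (per_entry_cost
                                   + per_heap_fetch_cost
                                   + per_filter_cost)
    total_cost = startup_cost + run_cost    # startup ~0.29, total ~9.04
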
Or in other words, time to get the first result row is not just startup
cost; it's startup cost plus run_cost/N, if the plan is estimated to
return N rows altogether.
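
For instance, with made-up numbers, a plan costed at startup 5.0 and
total 105.0 that is expected to return 1000 rows:

    startup_cost = 5.0
    total_cost   = 105.0
    N            = 1000

    run_cost       = total_cost - startup_cost      # 100.0
    first_row_cost = startup_cost + run_cost / N    # 5.1, not just 5.0
    last_row_cost  = total_cost                     # 105.0
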
regards, tom lane