Tom Lane <tgl@sss.pgh.pa.us> writes:
> I'm not sure what to do about this. I think that things might work
> out better if we redefined the startup cost as "estimated cost to
> retrieve the first tuple", rather than its current very-squishy
> definition as "cost to initialize the scan".
Actually I have wanted to raise this question for a very long time, ever
since I first read the code, but I didn't have an example showing it
causes any real impact, so I never asked.
startup_cost is defined as the cost to retrieve the *first tuple*, so
for a query like "SELECT * FROM t WHERE foo", the IO cost to retrieve
the first tuple is obviously not 0 (I think it could be roughly
total_cost / rows?). At the same time, the startup_cost of an IndexScan
is computed more strictly: it does count the IO blocks for the
root -> leaf descent. So I think there is an inconsistency there as well.
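To make the numbers concrete, here is a tiny standalone sketch of the
two conventions for a filtered seqscan. This is not PostgreSQL code;
the estimates and the "run_cost / rows" guess are just mine:

#include <stdio.h>

typedef double Cost;

int
main(void)
{
	/* made-up estimates for a seqscan over "SELECT * FROM t WHERE foo" */
	double	pages = 10000.0;		/* heap pages scanned                */
	double	tuples = 1000000.0;		/* tuples scanned                    */
	double	rows = 50.0;			/* rows expected to satisfy "foo"    */

	Cost	seq_page_cost = 1.0;	/* default GUC values                */
	Cost	cpu_tuple_cost = 0.01;

	Cost	run_cost = pages * seq_page_cost + tuples * cpu_tuple_cost;

	/* current convention: startup = cost to initialize the scan, ~0 */
	Cost	startup_init = 0.0;

	/*
	 * "first tuple" convention: some fraction of the table has to be
	 * scanned before the first qualifying tuple appears, so perhaps
	 * roughly run_cost / rows (my guess, not the planner's formula).
	 */
	Cost	startup_first = run_cost / rows;

	printf("total run cost            = %.2f\n", run_cost);
	printf("startup (initialize scan) = %.2f\n", startup_init);
	printf("startup (first tuple)     = %.2f\n", startup_first);
	return 0;
}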
> That would end up
> with the LIMIT node having a cost that's at least the sum of the
> startup costs of the input scans, which would fix this problem.
Great to know.
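If I read that right, a toy version of the LIMIT arithmetic would look
like the sketch below. It uses the usual "interpolate between startup
and total cost" idea; it is my own sketch with made-up numbers, not the
planner's cost_limit code:

#include <stdio.h>

typedef double Cost;

/*
 * Toy LIMIT costing: keep the subpath's startup cost and take a
 * proportional slice of its run cost.
 */
static Cost
toy_limit_total_cost(Cost sub_startup, Cost sub_total,
					 double sub_rows, double limit_rows)
{
	return sub_startup +
		(sub_total - sub_startup) * (limit_rows / sub_rows);
}

int
main(void)
{
	double	sub_rows = 50.0;		/* rows the input scan is expected to emit */
	Cost	sub_total = 20000.0;	/* made-up total cost of the input scan    */

	/* LIMIT 1 over the scan, under the two startup conventions */
	Cost	with_init_startup = toy_limit_total_cost(0.0, sub_total,
													 sub_rows, 1.0);
	Cost	with_first_tuple = toy_limit_total_cost(sub_total / sub_rows,
													sub_total,
													sub_rows, 1.0);

	printf("LIMIT 1 cost, startup = 0           : %.2f\n", with_init_startup);
	printf("LIMIT 1 cost, startup = first tuple : %.2f\n", with_first_tuple);
	return 0;
}

With the "first tuple" definition the LIMIT's cost can never fall below
the startup cost of its input, which is the property you described.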
> But changing that everywhere would be a lotta work.
In my understanding, the only place we would need to change is the
startup_cost in cost_seqscan. I must be wrong about that, but I'd like
to know where.
> and I'm far from sure that it would not have any negative
> side-effects.
Yes; however, I think the semantics would be more correct than before.
--
Best Regards
Andy Fan