Tom,
* Tom Lane (tgl@sss.pgh.pa.us) wrote:
> I've got the same problem with this that I do with the recently-proposed
> patch to fail queries with estimated cost > X --- to wit, I think it
> will result in a net *reduction* in system reliability not an improvement.
> Any such feature changes the planner estimates from mere heuristics into
> a gating factor that will make queries fail entirely. And they are
> really not good enough to put that kind of trust into.
Perhaps, instead, have the system fail the query once it's gone beyond
some configurable limit on temporary disk usage?  The query would still
have run for a while, but it wouldn't have run the partition out of
space, and it would at least have come back faster.
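To make that concrete, here's a rough sketch of the sort of accounting
I'm imagining: a per-session byte counter bumped from the temp file
write path and checked against a configurable limit.  This is just
standalone C to show the idea, not backend code, and the names
(temp_file_limit_kb, account_temp_write) are made up for illustration:

/*
 * Rough sketch of per-session temp file accounting.  Not backend code;
 * all names here are placeholders for illustration only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical setting: temp file space limit in kB; -1 means no limit */
static int64_t temp_file_limit_kb = 1024 * 1024;	/* e.g. 1GB */

/* Running total of temp file bytes written by this session */
static uint64_t temp_bytes_used = 0;

/*
 * Called from the temp file write path before each write; aborts the
 * query (here, just the program) once the session exceeds the limit.
 */
static void
account_temp_write(size_t nbytes)
{
	temp_bytes_used += nbytes;

	if (temp_file_limit_kb >= 0 &&
		temp_bytes_used > (uint64_t) temp_file_limit_kb * 1024)
	{
		/* In the backend this would be an error report, not exit() */
		fprintf(stderr,
				"ERROR: temporary file usage exceeds limit (%lld kB)\n",
				(long long) temp_file_limit_kb);
		exit(1);
	}
}

int
main(void)
{
	/* Simulate a sort spilling 8kB blocks to disk until the limit trips */
	for (;;)
		account_temp_write(8192);

	return 0;
}

The point of checking actual usage rather than the planner's estimate is
that a bad estimate never fails a query on its own; only queries which
really do chew through that much disk get killed.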
Comparing this to work_mem: do we do anything like that there?  I don't
think we do, which means we're trusting the planner to get the memory
estimate right, and that estimate can end up being way off, resulting
in queries using far more memory than work_mem would normally allow...
I recall a lot of discussion about that, but I don't recall whether
anything was actually done to resolve the issue.
It seems to me we probably shouldn't trust the planner's estimates and
should therefore implement checks that fail things once we've gone well
beyond what we expected to use.  If we've already done that for
work_mem, then reusing whatever mechanism we built there for a
'temporary disk space limit' would at least make me happy.  If we
haven't, then perhaps we should do something for both.
Thanks!
Stephen