Dimitri Fontaine <dimitri@2ndQuadrant.fr> writes:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>> WITH t AS (DELETE FROM foo RETURNING *)
>> SELECT * FROM t LIMIT 1;
>>
>> How many rows does this delete? I think we concluded that we should
>> force the DELETE to be run to conclusion even if the outer query didn't
>> read it all.
> The counter-example that jumps to mind is unix pipes. It's read-only at
> the consumer level but as soon as you stop reading, the producer stops.
> I guess that's only talking about the surprise factor, though.
> I'm not sure how far we go with the SIGPIPE analogy, but I wanted to say
> that maybe that would not feel so strange to some people if the DELETE
> were not run to completion but only until the reader is done.
I can see that there's a fair argument for that position in cases like
the above, but the trouble is that there are also cases where it's very
hard for the user to predict how many rows will be read. As examples,
mergejoins may stop short of reading all of one input depending on what
the last key value is from the other, and semijoins or antijoins will
stop whenever they hit a match in the inner input. I think in the
join cases we had better establish a simple rule "it'll get executed
to completion". We could maybe do things differently if the outer
query is non-join with a LIMIT, but that seems pretty inconsistent.
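[The semijoin case above can be sketched roughly as follows, assuming
hypothetical tables foo(id) and bar(id); this is an illustrative query,
not one from the thread:

-- The EXISTS is typically planned as a semijoin, which stops reading
-- its inner input as soon as it finds a match for each outer row, so
-- the number of CTE rows actually fetched is hard to predict.  Under
-- the "run to completion" rule, the DELETE removes every row of foo
-- regardless of how many rows the semijoin reads.
WITH t AS (DELETE FROM foo RETURNING id)
SELECT bar.id FROM bar
WHERE EXISTS (SELECT 1 FROM t WHERE t.id = bar.id);

-- this editor's sketch]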
regards, tom lane