Excerpts from Dimitri Fontaine's message of Fri Nov 12 17:13:59 -0300 2010:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
> > WITH t AS (DELETE FROM foo RETURNING *)
> > SELECT * FROM t LIMIT 1;
> >
> > How many rows does this delete? I think we concluded that we should
> > force the DELETE to be run to conclusion even if the outer query didn't
> > read it all
>
> The counter-example that jumps to mind is unix pipes. It's read-only at
> the consumer level but as soon as you stop reading, the producer stops.
> I guess that's only talking about the surprise factor, though.
It's not that straightforward, though: the producer can stop a bit ahead
of where the consumer stops reading, because there is a buffer in the
middle.  Witness this simple example:
$ cat > producer
#!/bin/sh
for i in `seq 1 1000`; do
        echo $i >> /tmp/mylog
        echo $i
done
$ chmod a+x producer
$ ./producer | head -5
1
2
3
4
5
$ cat /tmp/mylog
1
2
3
4
5
6
7
I certainly wouldn't want our implementation to behave like this.
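
To put that back in terms of Tom's example, here's a rough sketch of what
the run-to-completion rule buys us (the table definition is just something
made up to run it against):

CREATE TABLE foo (id int);
INSERT INTO foo SELECT generate_series(1, 1000);

WITH t AS (DELETE FROM foo RETURNING *)
SELECT * FROM t LIMIT 1;

SELECT count(*) FROM foo;  -- 0: every row deleted, though only one was returned

If the DELETE were instead allowed to stop "a bit ahead" of the outer
LIMIT, the way the producer above does, that final count could land
anywhere between 0 and 999 depending on how much the executor happened
to buffer.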
> I'm not sure how far we go with the SIGPIPE analogy, but I wanted to say
> that maybe that would not feel so strange to some people if the DELETE
> were not run to completion but only until the reader is done.
>
> What about this one:
>
> WITH d AS (DELETE FROM foo RETURNING id),
> q AS (INSERT INTO queue SELECT 'D', id FROM d)
> SELECT * FROM q ORDER BY id LIMIT 10;
Personally I find this one less surprising:
WITH d AS (DELETE FROM foo LIMIT 10 RETURNING id),
     q AS (INSERT INTO queue SELECT 'D', id FROM d)
SELECT * FROM q ORDER BY id;
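
Of course DELETE doesn't take a LIMIT, so to have something that actually
parses one would have to spell it with a subselect, roughly like the
sketch below (assuming foo has an id key, queue is (op, id), and giving
the INSERT a RETURNING clause so the outer SELECT has rows to read from q):

WITH d AS (DELETE FROM foo
            WHERE id IN (SELECT id FROM foo ORDER BY id LIMIT 10)
        RETURNING id),
     q AS (INSERT INTO queue
           SELECT 'D', id FROM d
        RETURNING id)
SELECT * FROM q ORDER BY id;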
--
Álvaro Herrera <alvherre@commandprompt.com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support