On 2015-02-08 09:56, Shay Rojansky wrote:
>>> More to the point, doesn't max_rows=1 have exactly the same dangers as
>>> LIMIT 1? The two seem to be identical, except that one is expressed in
>>> the SQL query and the other at the network protocol level.
>
>> The planner does not have access to network protocol level options while
>> it does know about LIMIT.
>
> That's an internal PostgreSQL matter (which, granted, may impact efficiency).
> My comment about max_rows being equivalent to LIMIT was meant to address
> Marko's argument that max_rows is dangerous because any row might come out
> and tests may pass accidentally (but that holds for LIMIT 1 as well,
> doesn't it?).
The point is that then the user gets to choose the behavior. LIMIT 1
without ORDER BY is very explicitly telling the reader of the code
"there might be more than one row returned by this query, but I'm okay
with getting only one of them, whichever it is". And when the LIMIT 1 is
*not* there, you get the driver automatically checking your queries for
sanity. If the driver always throws away the rows after the first one,
it's difficult to later move to the stricter behavior of enforcing that
no more than one row was returned.
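
For illustration, here's a minimal sketch of the two behaviors in
Python against a DB-API-style cursor; the helper names and the
TooManyRowsError exception are hypothetical, not any real driver's API:

class TooManyRowsError(Exception):
    """Raised when a query expected to return one row returns more."""

def fetch_single_strict(cursor, query, params=()):
    # Execute, then peek at a second row so extra rows surface as an error.
    cursor.execute(query, params)
    rows = cursor.fetchmany(2)
    if len(rows) > 1:
        raise TooManyRowsError("expected at most one row from: " + query)
    return rows[0] if rows else None

def fetch_single_silent(cursor, query, params=()):
    # Execute and quietly discard everything after the first row.
    cursor.execute(query, params)
    return cursor.fetchone()

With the strict variant, a query that unexpectedly matches two rows
fails loudly in tests; with the silent variant (or with an explicit
LIMIT 1), whichever row happens to come back first is accepted.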
Anyway, like you said somewhere upthread, the interface the driver
you're working on promises to implement can't be changed right now due
to backwards compatibility concerns. But I see new interfaces being
created all the time, and they all make this same mistake.
.m