Marcus "Mike" Mascari wrote:
> What I would like to see (perhaps its possible
> right now, or a workaround is available) is a method
> for yielding an elog() message when the number of
> tuples to be returned is over a specified USER limit.
>
> [...]
>
> However, what we would like to prevent is some
> new user, unfamiliar with how to design aggregate
> and restrictive queries, from doing a SELECT * FROM sales
> and having 1 gig of data shipped across the network
> to their Excel, Access, or Crystal Report Writer
> application. Instead, we would like to generate
> an elog() error in the backend like:
>
> "You are allowed to return a maximum of 20,000 rows."
I don't think this would be easy to implement, because it
conflicts with the general design of the PostgreSQL
optimizer/executor combination.
Whenever possible (i.e. no sorting or grouping), the execution
plan is set up so that data is fed directly from the table to
the client. So by the time the maximum is exceeded, the client
has already received that many rows.
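The streaming problem can be sketched in a few lines. This is a toy model in Python, not PostgreSQL source; all names (seq_scan, stream_to_client) are made up for illustration. The point is that with a streaming executor, a row cap can only fire AFTER the capped number of rows has already gone out on the wire:

```python
def seq_scan(table):
    """Yield rows one at a time, like a streaming executor node."""
    for row in table:
        yield row

def stream_to_client(plan, deliver, max_rows):
    """Deliver rows as they are produced.  The cap can only trip
    after max_rows rows have already been delivered."""
    count = 0
    for row in plan:
        deliver(row)          # simulates sending the row on the wire
        count += 1
        if count > max_rows:
            # analogous to an elog() error in the backend
            raise RuntimeError(
                "You are allowed to return a maximum of %d rows." % max_rows)
    return count

received = []
try:
    stream_to_client(seq_scan(range(10)), received.append, max_rows=3)
except RuntimeError as err:
    print(err, "-- but the client already got", len(received), "rows")
```

Note that the error fires on row max_rows + 1, so the client has already consumed the first batch before the backend can complain.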
And how should the database know how many rows it WILL send
before collecting them all? In the case of a simple
"SELECT * FROM sales" it could look at the statistics. But
even for a simple join between "salesorg" and "sales", it can
only guess how many rows the query MIGHT produce (the
optimizer uses this guess to decide the join order and where
indices might be helpful). The exact number of rows is only
known after the complete execution of the plan - and by then
the rows have already been sent to the client.
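To make the estimate-vs-actual gap concrete, here is a minimal sketch (plain Python, hypothetical names; the selectivity figure is just an assumed planner guess, not anything PostgreSQL actually computes this way):

```python
def estimate_join_rows(n_left, n_right, selectivity_guess):
    """Planner-style estimate: table sizes from statistics times a
    guessed join selectivity.  A guess, not a guarantee."""
    return int(n_left * n_right * selectivity_guess)

def run_join(left, right, key_left, key_right):
    """Naive nested-loop join; the exact row count exists only
    after the whole plan has executed."""
    return [(l, r) for l in left for r in right
            if l[key_left] == r[key_right]]

salesorg = [{"org": i} for i in range(3)]
sales = [{"org": i % 3, "amount": 10 * i} for i in range(9)]

# A planner guessing 10% selectivity predicts far fewer rows
# than the join actually produces.
print("estimate:", estimate_join_rows(len(salesorg), len(sales), 0.1))
print("actual:  ", len(run_join(salesorg, sales, "org", "org")))
```

The estimate can be off in either direction, which is exactly why a hard "maximum rows" check cannot be decided up front: the real count only appears once every tuple has been produced and, in a streaming plan, already shipped.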
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#========================================= wieck@debis.com (Jan Wieck) #