In the last exciting episode, dvlink@yahoo.com (David Link) wrote:
> Why does adding SUM and GROUP BY destroy performance?
When you use SUM (or other aggregates), there is no shortcut: the
database has to walk through each and every tuple matched by the WHERE
clause. Some systems keep statistics that can short-circuit that. In
PostgreSQL, the use of MVCC to let new data "almost magically appear"
:-) has the drawback, in the case of aggregates, of leaving little
room for shortcuts.
There are some cases where you CAN do much better than the aggregates
do.
SELECT MAX(field) FROM table WHERE a = 'THIS' AND b = 'THAT';
may be replaced with the likely-to-be-faster:
SELECT field FROM table WHERE a = 'THIS' AND b = 'THAT'
ORDER BY field DESC LIMIT 1;
MIN() admits a similar rewriting (ORDER BY field ASC LIMIT 1). If
there is an index on field, this will likely be _way_ faster than
using MIN()/MAX(), since the planner can read one entry off the end of
the index instead of scanning every matching row.
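A minimal sketch of the rewrite, using SQLite from Python purely for
illustration (the table, columns, and data are made up for the
example; the same idea applies to PostgreSQL):

```python
import sqlite3

# Hypothetical table matching the shape of the example query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b TEXT, field INTEGER)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [("THIS", "THAT", 5), ("THIS", "THAT", 9), ("OTHER", "THAT", 99)],
)

# An index on field is what makes the rewritten form cheap: the
# planner can walk it from the top and stop at the first hit.
conn.execute("CREATE INDEX t_field_idx ON t (field)")

# Aggregate form: conceptually scans every matching tuple.
agg = conn.execute(
    "SELECT MAX(field) FROM t WHERE a = 'THIS' AND b = 'THAT'"
).fetchone()[0]

# Rewritten form: ORDER BY ... DESC LIMIT 1 returns the same value.
rewritten = conn.execute(
    "SELECT field FROM t WHERE a = 'THIS' AND b = 'THAT' "
    "ORDER BY field DESC LIMIT 1"
).fetchone()[0]

assert agg == rewritten
print(agg)
```

Both queries return the same answer; the difference is purely in how
much of the table the executor has to touch to produce it.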
In a sense, it's not that aggregates "destroy" performance; it's just
that there are no magical shortcuts that make them incredibly fast.
--
wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org').
http://www.ntlug.org/~cbbrowne/multiplexor.html
"And 1.1.81 is officially BugFree(tm), so if you receive any bug
reports on it, you know they are just evil lies." -- Linus Torvalds