> Especially with very large tables, hearing the disks grind as Postgres scans
> every single row in order to determine the number of rows in a table or the
> max value of a column (even a primary key created from a sequence) is pretty
> painful. If the implementation is not too horrendous, this is an area where
> an orders-of-magnitude performance increase can be had.
Actually, it's very painful. MySQL has accepted the concurrency hit in
order to accomplish it -- PostgreSQL would require a more subtle
approach.
Anyway, with rules you can force this (the table and rule names below
are just placeholders):

CREATE RULE count_insert AS ON INSERT TO mytable
    DO ALSO UPDATE counter SET tablecount = tablecount + 1;
CREATE RULE count_delete AS ON DELETE TO mytable
    DO ALSO UPDATE counter SET tablecount = tablecount - 1;
You need to create a table "counter" with a single row that will keep
track of the number of rows in the table. Just remember, you've now
serialized all writes to the table, but in your situation it may be
worthwhile.
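A minimal setup sketch, again using mytable as a placeholder name (seed
the counter with the current row count before creating the rules):

CREATE TABLE counter (tablecount bigint NOT NULL);
INSERT INTO counter SELECT count(*) FROM mytable;

After that, SELECT tablecount FROM counter; gives the same answer as
SELECT count(*) FROM mytable; without the sequential scan.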
max(foo) optimization requires an extension to the aggregates system.
It will likely happen within a few releases. A workaround can be
accomplished today through the use of LIMIT and ORDER BY.
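For example (assuming there is an index on foo, which is what makes this
fast):

SELECT foo FROM mytable ORDER BY foo DESC LIMIT 1;

The planner can walk the index backwards and stop after one row instead
of scanning the entire table.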