On Wed, 2005-05-11 at 12:53 +0800, Christopher Kings-Lynne wrote:
> > Another trick you can use with large data sets like this, when you
> > want results back in seconds, is to have regularly updated tables
> > that aggregate the data along each column normally aggregated
> > against the main data set.
>
> > Maybe some bright person will prove me wrong by posting some working
> > information about how to get these apparently absent features working.
>
> Most people just use simple triggers to maintain aggregate summary tables...
Agreed. I've also got a view which calls a function that will 1) use the
summary table where data exists, or 2) calculate the summary
information, load it into the summary table, and send a copy to the
client (a partial query-results cache).
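
For anyone who wants a concrete starting point, a rough sketch of that
cache-on-miss pattern is below. The schema is invented purely for
illustration (sales, amount, sale_date, daily_sales_summary,
get_daily_sales are not from anyone's real setup):

-- Hypothetical summary table: one row per day of sales.
CREATE TABLE daily_sales_summary (
    sale_day  date PRIMARY KEY,
    total_amt numeric NOT NULL,
    num_sales bigint  NOT NULL
);

-- Return the cached row for a day; on a miss, aggregate the
-- (hypothetical) sales detail table, cache the result, and return it.
CREATE OR REPLACE FUNCTION get_daily_sales(p_day date)
RETURNS daily_sales_summary AS $$
DECLARE
    r daily_sales_summary%ROWTYPE;
BEGIN
    -- 1) use the summary table where data exists
    SELECT * INTO r FROM daily_sales_summary WHERE sale_day = p_day;
    IF FOUND THEN
        RETURN r;
    END IF;

    -- 2) calculate the summary from the detail rows ...
    SELECT p_day, coalesce(sum(amount), 0), count(*)
      INTO r.sale_day, r.total_amt, r.num_sales
      FROM sales
     WHERE sale_date = p_day;

    -- ... load it into the summary table, and hand a copy back
    INSERT INTO daily_sales_summary
        VALUES (r.sale_day, r.total_amt, r.num_sales);
    RETURN r;
END;
$$ LANGUAGE plpgsql;

Callers then just do SELECT * FROM get_daily_sales('2005-05-10'); a
no-argument variant can be wrapped in a view the same way. The sketch
ignores the race where two sessions miss the same day at once and
collide on the primary key.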
It's not all nicely abstracted behind user-friendly syntax, but most of
those features can be cobbled together (with some effort) in PostgreSQL.
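
For the trigger side Christopher mentions, a matching sketch against the
same made-up schema, handling inserts only for brevity (UPDATE and
DELETE on the detail table need the mirror-image logic):

-- Keep daily_sales_summary in step as new detail rows arrive.
CREATE OR REPLACE FUNCTION sales_summary_trig()
RETURNS trigger AS $$
BEGIN
    -- bump the existing day's row ...
    UPDATE daily_sales_summary
       SET total_amt = total_amt + NEW.amount,
           num_sales = num_sales + 1
     WHERE sale_day = NEW.sale_date;
    -- ... or start a new one if this is the first sale of the day
    IF NOT FOUND THEN
        INSERT INTO daily_sales_summary
            VALUES (NEW.sale_date, NEW.amount, 1);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sales_summary_update
    AFTER INSERT ON sales
    FOR EACH ROW EXECUTE PROCEDURE sales_summary_trig();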
--