Re: Aggregate "rollup" - Mailing list pgsql-hackers

From Greg Stark
Subject Re: Aggregate "rollup"
Date
Msg-id 87k7fd6sp9.fsf@stark.dyndns.tv
In response to Aggregate "rollup"  (mlw <pgsql@mohawksoft.com>)
List pgsql-hackers
mlw <pgsql@mohawksoft.com> writes:

> I had written a piece of code about two years ago that used the aggregate
> feature of PostgreSQL to create an array of integers from an aggregate, as:
> 
> select int_array_aggregate( column ) from table group by column

I found this and am using it extensively. It's extremely helpful, thank you.

It goes well with either the *= operators in contrib/array or the GiST
indexing in contrib/intarray.
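
Roughly, the pattern looks something like this (the table and column names
are invented for the example, not my real schema):

    -- build a lookup table of keyword -> array of matching message ids
    create table fast_lookup as
      select keyword, int_array_aggregate( message_id ) as message_ids
        from message_keywords
       group by keyword;

    -- index the integer arrays with the contrib/intarray GiST opclass
    create index fast_lookup_idx on fast_lookup
        using gist ( message_ids gist__int_ops );

    -- overlap queries like this one can then use the index
    select keyword from fast_lookup
     where message_ids && '{42,43}'::int[];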

One problem I've found, though, is that the optimizer doesn't have any good
statistics for estimating how many rows such operators will match. It seems
like fixing that would require a lot of changes to the statistics that get
gathered.

> create table fast_lookup as select reference, aggregate_array( field ) from
> table group by field
> 
> Where the function aggregate_array takes any number of data types.

Sure, that seems logical. Actually, I already bumped into a situation where I
wanted an array of char(1). I just kludged it to use ascii() of the first
character, but it would be cleaner, and perhaps better for Unicode support
later, to use the actual character.
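
That is, something along these lines (again, names invented):

    -- status is a char(1) column; store ascii(status) in the array instead
    select parent_id, int_array_aggregate( ascii(status) ) as status_codes
      from child_table
     group by parent_id;

It works, but the values have to be decoded with chr() again on the way out.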

Someone else on the list already asked for a function that gave an array of
varchar. I think they were pointed at a general-purpose function from plr.
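
Presumably, once polymorphic types and array_append() are available, one
could declare a general-purpose aggregate directly in SQL; an untested
sketch, assuming those features:

    create aggregate array_accum (
        basetype = anyelement,
        sfunc    = array_append,
        stype    = anyarray,
        initcond = '{}'
    );

    select reference, array_accum( some_varchar_column )
      from some_table
     group by reference;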

--
greg


