Re: can I pass arguments directly to final function in aggregates - Mailing list pgsql-general

From Nicklas Avén
Subject Re: can I pass arguments directly to final function in aggregates
Date
Msg-id 519A7837.3000808@jordogskog.no
Whole thread Raw
In response to Re: can I pass arguments directly to final function in aggregates  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Thank you Tom

On 05/19/2013 01:26 AM, Tom Lane wrote:
> Nicklas Avén <nicklas.aven@jordogskog.no> writes:
>
> Perhaps you could construct your usage like this:
>
>     post_process_function(aggregate_function(...), fixed_argument)
>
> where the aggregate_function just collects the varying values
> and then the post_process_function does what you were thinking
> of as the final function.
>
>

Maybe that is the way I have to go, but I would like to avoid it because
I think the interface gets a bit less clean for the users.

I also suspect that it causes some extra memcopying to get out of the
aggregate function and into a new function. (Am I right about that?)
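For reference, Tom's suggestion would look roughly like this (the function and aggregate names here are hypothetical, just to illustrate the call shape):

```sql
-- Hypothetical sketch: the aggregate only collects the varying
-- geometries; the fixed arguments go to a separate function that
-- post-processes the aggregate's result.
SELECT post_process(geom_collect(geom), 2::smallint, 'abc'::char(3))
FROM my_table
GROUP BY some_key;
```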

As I understand it, I have two options:

1)    Do as you suggest and divide the process into one aggregate
function and one post-processing function.
2)    Construct a structure for the state value that can hold those
values. In this case those arguments are just 1 smallint and 1 char(3).
I will just have to handle them for the first row to store them in my
structure; after that I can ignore them. Am I right that it will be a
very small overhead even if those values are sent to the function for
each row?
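Option 2 could be sketched like this (all names are hypothetical; the point is that the transition function stores the constant arguments in the state on the first call and ignores them afterwards, so the final function can read them from the state):

```sql
-- Hypothetical composite state: the fixed arguments plus the
-- accumulated geometries.
CREATE TYPE agg_state AS (
    prec   smallint,
    mode   char(3),
    geoms  geometry[]
);

-- Transition function: capture the constants only when the state
-- is still NULL (i.e. on the first row of the group).
CREATE FUNCTION geom_accum(st agg_state, g geometry,
                           p smallint, m char(3))
RETURNS agg_state AS $$
BEGIN
    IF st IS NULL THEN
        st := ROW(p, m, ARRAY[g])::agg_state;
    ELSE
        st.geoms := st.geoms || g;  -- constants already stored; ignore p, m
    END IF;
    RETURN st;
END;
$$ LANGUAGE plpgsql;

-- Final function sees only the state, but the state now carries
-- the fixed arguments along with the data.
CREATE FUNCTION geom_final(st agg_state)
RETURNS geometry AS $$
    SELECT ST_Collect(st.geoms);  -- placeholder for the real processing
$$ LANGUAGE sql;

CREATE AGGREGATE my_agg(geometry, smallint, char(3)) (
    sfunc     = geom_accum,
    stype     = agg_state,
    finalfunc = geom_final
);
```

The per-row cost of passing the two constants should indeed be small compared with handling the geometries themselves; the `IF st IS NULL` branch is the only extra work after the first row.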

My question is whether I can get further advice about what bottlenecks
and traps I should consider.

What I am aggregating is geometries (PostGIS). A group can contain from
one row to millions of rows, and the geometries can range from points of
a few bytes to complex geometry collections of many megabytes.

Regards

/Nicklas

