On 14 November 2014 07:37, Jim Nasby <Jim.Nasby@bluetreble.com> wrote:
> On 11/12/14, 1:54 AM, David Rowley wrote:
>>
>> On Tue, Nov 11, 2014 at 9:29 PM, Simon Riggs <simon@2ndquadrant.com
>> <mailto:simon@2ndquadrant.com>> wrote:
>>
>>
>> This plan type is widely used in reporting queries, so will hit the
>> mainline of BI applications and many Mat View creations.
>> This will allow SELECT count(*) FROM foo to go faster also.
>>
>> We'd also need to add some infrastructure to merge aggregate states
>> together for this to work properly. This means it could also work for
>> avg(), stddev(), etc. For max() and min() the merge functions would likely
>> just be the same as the transition functions.
>
>
> Sanity check: what % of a large aggregate query fed by a seqscan is actually
> spent in the aggregate functions? Even if you look strictly at CPU cost,
> isn't there more code involved in getting data to the aggregate function than
> in the aggregation itself, except maybe for numeric?
Yes, which is why I suggested pre-aggregating before collecting the
streams together.

The point is not that the aggregation is expensive; it's that the
aggregation eats data, so the bandwidth required by later steps is
reduced and those steps don't become a bottleneck that renders the
parallel Seq Scan ineffective.
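To make the idea concrete, here is a minimal sketch (in Python, purely
illustrative, not the PostgreSQL implementation) of per-worker partial
aggregation with a merge step for avg(): each worker keeps a small
(count, sum) state, and only those states cross the stream boundary,
not the raw rows. The function names are hypothetical.

```python
def avg_transition(state, value):
    # Per-row transition: fold one input value into the running state.
    count, total = state
    return (count + 1, total + value)

def avg_merge(s1, s2):
    # Merge two partial states produced by different workers.
    return (s1[0] + s2[0], s1[1] + s2[1])

def avg_final(state):
    # Final function: turn the merged state into the result.
    count, total = state
    return total / count if count else None

# Two "workers" each scan a disjoint chunk of the data:
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
chunks = [data[:3], data[3:]]

partials = []
for chunk in chunks:
    state = (0, 0.0)
    for v in chunk:
        state = avg_transition(state, v)
    partials.append(state)  # only this small state is sent upstream

merged = avg_merge(partials[0], partials[1])
print(avg_final(merged))  # → 3.5
```

For max() and min() the merge would indeed be the same operation as the
transition, e.g. merging two max() states is just max(s1, s2).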
--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services