Re: [HACKERS] to-do item for explain analyze of hash aggregates? - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: [HACKERS] to-do item for explain analyze of hash aggregates?
Date
Msg-id 2527f5cb-5992-ae66-f3ec-4aa2396065ec@2ndquadrant.com
In response to Re: [HACKERS] to-do item for explain analyze of hash aggregates?  (Andres Freund <andres@anarazel.de>)
Responses Re: [HACKERS] to-do item for explain analyze of hash aggregates?  (Jeff Janes <jeff.janes@gmail.com>)
Re: [HACKERS] to-do item for explain analyze of hash aggregates?  (Andres Freund <andres@anarazel.de>)
List pgsql-hackers
On 04/24/2017 08:52 PM, Andres Freund wrote:
> On 2017-04-24 11:42:12 -0700, Jeff Janes wrote:
>> The explain analyze of the hash step of a hash join reports something like
>> this:
>>
>>    ->  Hash  (cost=458287.68..458287.68 rows=24995368 width=37) (actual
>> rows=24995353 loops=1)
>>          Buckets: 33554432  Batches: 1  Memory Usage: 2019630kB
>>
>>
>> Should the HashAggregate node also report on Buckets and Memory Usage?  I
>> would have found that useful several times.  Is there some reason this is
>> not wanted, or not possible?
>
> I've wanted that too.  It's not impossible at all.
>

Why wouldn't that be possible? We probably can't use exactly the same 
approach as Hash, because hash joins use a custom hash table while 
hashagg uses dynahash, IIRC. But why couldn't we measure the amount of 
memory by looking at the memory context, for example?

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


