Re: [HACKERS] to-do item for explain analyze of hash aggregates? - Mailing list pgsql-hackers

From: Jeff Janes
Subject: Re: [HACKERS] to-do item for explain analyze of hash aggregates?
Date:
Msg-id: CAMkU=1waOykv0z6XXp_xPeqz+UBYshrc9=gHN5pfHrHQj0+NUA@mail.gmail.com
In response to: Re: [HACKERS] to-do item for explain analyze of hash aggregates? (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
Responses: Re: [HACKERS] to-do item for explain analyze of hash aggregates? (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
           Re: [HACKERS] to-do item for explain analyze of hash aggregates? (Andres Freund <andres@anarazel.de>)
List: pgsql-hackers
On Mon, Apr 24, 2017 at 12:13 PM, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
On 04/24/2017 08:52 PM, Andres Freund wrote:
On 2017-04-24 11:42:12 -0700, Jeff Janes wrote:
The EXPLAIN ANALYZE output for the Hash step of a hash join reports something like this:

   ->  Hash  (cost=458287.68..458287.68 rows=24995368 width=37) (actual rows=24995353 loops=1)
         Buckets: 33554432  Batches: 1  Memory Usage: 2019630kB


Should the HashAggregate node also report on Buckets and Memory Usage? I would have found that useful several times. Is there some reason this is not wanted, or not possible?
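
[For illustration, a hypothetical example of the kind of annotation being requested, mirroring the Hash node's line above; the plan and numbers here are made up:

   HashAggregate  (cost=693836.52..693838.52 rows=200 width=12) (actual rows=200 loops=1)
     Buckets: 256  Batches: 1  Memory Usage: 40kB]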

I've wanted that too.  It's not impossible at all.


Why wouldn't that be possible? We probably can't use exactly the same approach as Hash, because hash joins use a custom hash table while hashagg uses dynahash, IIRC. But why couldn't we measure the amount of memory by looking at the memory context, for example?
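
[A minimal sketch of that memory-context idea, for illustration only: this is not the actual patch, the helper name is hypothetical, and MemoryContextMemAllocated() was only added to PostgreSQL in releases later than this thread.

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Hypothetical helper: report how much memory a HashAggregate's hash
     * table is using by asking its memory context, the way the Hash node
     * reports "Memory Usage". */
    static void
    report_hashagg_memory(MemoryContext hashcontext)
    {
        /* total bytes allocated in this context and its children */
        Size    mem = MemoryContextMemAllocated(hashcontext, true);

        /* EXPLAIN-style figure: kilobytes, rounded up */
        elog(INFO, "Memory Usage: %zukB", (mem + 1023) / 1024);
    }

Later PostgreSQL releases did grow something along these lines: EXPLAIN ANALYZE on a HashAggregate now reports Batches and Memory Usage.]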

He said "not impossible", meaning it is possible.

I've added it to the wiki Todo page. (Hopefully that has not doomed it to be forgotten.)

Cheers,

Jeff
