Re: Make MemoryContextMemAllocated() more precise - Mailing list pgsql-hackers

From: Jeff Davis
Subject: Re: Make MemoryContextMemAllocated() more precise
Date:
Msg-id: ddb5d6e07e66dee4f21760aaa972aa8bb7462ea1.camel@j-davis.com
In response to: Re: Make MemoryContextMemAllocated() more precise (Tomas Vondra <tomas.vondra@2ndquadrant.com>)
List: pgsql-hackers
On Thu, 2020-03-19 at 19:11 +0100, Tomas Vondra wrote:
> AFAICS the 2x allocation is the worst case, because it only happens
> right after allocating a new block (of twice the size), when the
> "utilization" drops from 100% to 50%. But in practice the utilization
> will be somewhere in between, with an average of 75%.

Sort of. Hash Agg is constantly watching the memory, so it will
typically spill right at the point where the accounting for that memory
context is off by 2X. 
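
To make that concrete, here is a minimal standalone simulation (not the
actual aset.c code; the block sizes match ALLOCSET_DEFAULT_SIZES, and
the 64-byte chunk size is just illustrative) of small allocations
accumulating in a doubling-block context while a 4MB limit is checked:

#include <stdio.h>
#include <stddef.h>

int
main(void)
{
    size_t  block = 8 * 1024;               /* initial block size */
    size_t  max_block = 8 * 1024 * 1024;    /* block size cap */
    size_t  limit = 4 * 1024 * 1024;        /* default work_mem */
    size_t  allocated = 0;                  /* what the accounting reports */
    size_t  used = 0;                       /* bytes actually handed out */

    while (allocated <= limit)
    {
        if (used == allocated)
        {
            /* context is full: grab a new block, twice the size */
            allocated += block;
            if (block < max_block)
                block *= 2;
        }
        used += 64;                         /* hand out one more chunk */
    }
    printf("limit check trips at: allocated=%zu used=%zu (%.0f%%)\n",
           allocated, used, 100.0 * used / allocated);
    return 0;
}

It reports the limit check tripping with roughly 8MB allocated but only
4MB actually used, i.e. right at the 2X point.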

That 2X error is mitigated because the hash table itself (the array of
TupleHashEntryData) ends up allocated as its own block, so it carries
no doubling waste. The total (table memory + out-of-line data) might be
close to correct if the hash table array itself is a large fraction of
the data, but I don't think that's something we want to rely on.
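
As a sketch of that arithmetic (the numbers here are invented; the
point is only that the dedicated table block carries no slop while the
out-of-line data can carry up to 2x):

#include <stdio.h>

int
main(void)
{
    double  table = 3.0;    /* MB used by the TupleHashEntryData array,
                             * allocated as its own exactly-sized block */
    double  tuples = 1.0;   /* MB used by out-of-line tuple data, which
                             * lives in doubling blocks */

    double  used = table + tuples;
    double  reported_worst = table + 2.0 * tuples;

    printf("used=%.1fMB reported(worst)=%.1fMB error=%.0f%%\n",
           used, reported_worst, 100.0 * (reported_worst - used) / used);
    return 0;
}

With a table-heavy context the reported total comes out only 25% high,
which looks tolerable, but that accuracy is an accident of the data
shape rather than something the accounting guarantees.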

> And we're not doubling the block size indefinitely - there's an upper
> limit, so over time the utilization drops less and less. So as the
> contexts grow, the discrepancy disappears. And I'd argue the smaller
> the context, the less of an issue the overcommit behavior is.

The problem is that the default work_mem is 4MB, and the block doubling
takes the allocation straight past that to 8MB, so this is a problem
even with default settings.
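
For reference, here is the block series under the default parameters
(8kB initial size doubling up to an 8MB cap, as in
ALLOCSET_DEFAULT_SIZES), printing the worst-case utilization right
after each new block is allocated:

#include <stdio.h>

int
main(void)
{
    long    kb = 1024;
    long    block = 8 * kb;             /* initial block size */
    long    max_block = 8 * 1024 * kb;  /* doubling stops here */
    long    total = 0;
    int     i;

    for (i = 0; i < 14; i++)
    {
        total += block;
        printf("block=%5ldkB total=%6ldkB worst-case utilization=%3.0f%%\n",
               block / kb, total / kb, 100.0 * (total - block) / total);
        if (block < max_block)
            block *= 2;
    }
    return 0;
}

The total crosses the 4MB default work_mem at exactly the step where
worst-case utilization bottoms out near 50%; the cap only starts
pulling utilization back up (67%, 75%, 80%, ...) for contexts several
times larger than the default limit.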

Regards,
    Jeff Davis
