Re: Reducing the chunk header sizes on all memory context types - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Reducing the chunk header sizes on all memory context types
Date
Msg-id 209211.1661991821@sss.pgh.pa.us
In response to Re: Reducing the chunk header sizes on all memory context types  (Tomas Vondra <tomas.vondra@enterprisedb.com>)
Responses Re: Reducing the chunk header sizes on all memory context types
Re: Reducing the chunk header sizes on all memory context types
List pgsql-hackers
Tomas Vondra <tomas.vondra@enterprisedb.com> writes:
> Focusing on the aset, vast majority of allocations (60M out of 64M) is
> small enough to use power-of-2 logic, and we go from 6.3GB to 8.2GB, so
> ~30%. Not great, not terrible.

Not sure why this escaped me before, but I remembered another argument
for not forcibly adding space for a sentinel: if you don't have room,
that means the chunk end is up against the header for the next chunk,
which means that any buffer overrun will clobber that header.  So we'll
detect the problem anyway if we validate the headers to a reasonable
extent.

The hole in this argument is that the very last chunk allocated in a
block might have no following chunk to validate.  But we could probably
special-case that somehow, maybe by laying down a sentinel in the free
space, where it will get overwritten by the next chunk when that does
get allocated.

30% memory bloat seems like a high price to pay if it adds negligible
detection ability, which seems to be the case if this argument holds.
Is there reason to think we can't validate headers enough to catch
clobbers?

            regards, tom lane


