Re: Changing types of block and chunk sizes in memory contexts - Mailing list pgsql-hackers

From David Rowley
Subject Re: Changing types of block and chunk sizes in memory contexts
Date
Msg-id CAApHDvqcPZrPhV7+HSxnoJ_vjTjfm8NBwdJk7JwF8-gXE8or=A@mail.gmail.com
In response to Re: Changing types of block and chunk sizes in memory contexts  (Peter Eisentraut <peter@eisentraut.org>)
Responses Re: Changing types of block and chunk sizes in memory contexts  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Wed, 28 Jun 2023 at 20:13, Peter Eisentraut <peter@eisentraut.org> wrote:
> size_t (= Size) is the correct type in C to store the size of an object
> in memory.  This is partially a self-documentation issue: If I see
> size_t in a function signature, I know what is intended; if I see
> uint32, I have to wonder what the intent was.

Perhaps it's ok to leave the context creation functions with Size-typed
parameters and then just Assert that the passed-in sizes are not
larger than 1GB within the context creation function.  That way we
could keep this change self-contained in the .c file for the given
memory context, and readability wouldn't suffer.  If we
ever wanted to lift the 1GB limit on block sizes then we'd not need to
switch the function signature again. There's documentation where the
struct's field is declared, so having a smaller type in the struct
itself does not seem like a reduction of documentation quality.
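To illustrate the idea, here's a minimal sketch (names and struct
layout are hypothetical, not the actual PostgreSQL code): the creation
function keeps Size-typed parameters at the API boundary, and the
narrowing to 32-bit storage happens privately, guarded by asserts.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical 1GB cap on block sizes, for illustration only. */
#define MAX_BLOCK_SIZE ((size_t) 1 << 30)

typedef struct DemoContext
{
    uint32_t initBlockSize;     /* stored narrow: never exceeds 1GB */
    uint32_t maxBlockSize;
} DemoContext;

/*
 * Parameters stay size_t (Size) so callers see the conventional type;
 * the 1GB limit is enforced inside the context's own .c file.
 */
static void
demo_context_init(DemoContext *cxt, size_t initBlockSize,
                  size_t maxBlockSize)
{
    assert(initBlockSize <= MAX_BLOCK_SIZE);
    assert(maxBlockSize <= MAX_BLOCK_SIZE);

    cxt->initBlockSize = (uint32_t) initBlockSize;
    cxt->maxBlockSize = (uint32_t) maxBlockSize;
}
```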

> You could make an argument that using shorter types would save space for
> some internal structs, but then you'd have to show some more information
> where and why that would be beneficial.

I think there's not much need to go proving this speeds something up.
There's simply no point in the struct fields being changed in
Melih's patch to be bigger than 32 bits, as we never need to store more
than 1GB in them.  Reducing these down means we may have to touch
fewer cache lines, and we'll also have more space on the keeper blocks
to store allocations.  Memory allocation performance is fairly
fundamental to Postgres's performance. In my view, we shouldn't have
fields that are twice as large as they need to be in code as hot as
this.
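A quick sketch of the space argument, with hypothetical header layouts
(field names are illustrative, not PostgreSQL's actual struct members):
on 64-bit platforms size_t is 8 bytes, so narrowing two fields to
uint32_t shrinks the header and leaves more room on the block itself.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Header with Size-typed fields: 8 bytes each on 64-bit platforms. */
typedef struct WideHeader
{
    void   *next;
    size_t  blksize;
    size_t  freeoffset;
} WideHeader;

/* Same header with uint32_t fields: values are capped at 1GB anyway,
 * so 4 bytes per field is enough. */
typedef struct NarrowHeader
{
    void    *next;
    uint32_t blksize;
    uint32_t freeoffset;
} NarrowHeader;
```

On a typical 64-bit build the narrow header is 16 bytes against 24,
so every block keeps 8 more bytes available for allocations.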

> Absent any strong performance argument, I don't see the benefit of this
> change.  People might well want to experiment with MEMORYCHUNK_...
> settings larger than 1GB.

Anyone doing so will be editing C code anyway.  They can adjust these
fields then.

David


