Re: Improve monitoring of shared memory allocations - Mailing list pgsql-hackers

From Rahila Syed
Subject Re: Improve monitoring of shared memory allocations
Date
Msg-id CAH2L28tzCFEk2bxQ+oYv6zda=LFLfd_9cmq7HzsT4nj9KN1Yvg@mail.gmail.com
In response to Re: Improve monitoring of shared memory allocations  (Tomas Vondra <tomas@vondra.me>)
Responses Re: Improve monitoring of shared memory allocations
List pgsql-hackers
Hi Tomas,


1) alignment

There was a comment with a question whether we need to MAXALIGN the
chunks in dynahash.c, which were originally allocated by ShmemAlloc, but
now it's part of one large allocation, which is then cut into pieces
(using pointer arithmetic).

I was not sure whether we need to enforce some alignment, we briefly
discussed that off-list. I realize you chose to add the alignment, but I
haven't noticed any comment in the patch why it's needed, and it seems
to me it may not be quite correct.
 

I have added MAXALIGN to specific allocations, such as HASHHDR and
HASHSEGMENT, with the expectation that allocations in multiples of this,
like dsize * sizeof(HASHSEGMENT), would automatically be aligned.



Let me explain what I had in mind, and why I think the way v5 doesn't
actually do that. It took me a while before I understood what alignment
is about, and for a while it was haunting my patches, so hopefully this
will help others ...

The "alignment" is about pointers (or addresses), and when a pointer is
aligned it means the address is a multiple of some number. For example,
a 4B-aligned pointer is a multiple of 4B, so 0x00000100 is 4B-aligned,
while 0x00000101 is not. Sometimes we use data types to express the
alignment, e.g. int-aligned is 4B-aligned, but that's a detail. AFAIK
the alignment is always 2^k, so 1, 2, 4, 8, ...

The primary reason for alignment is that some architectures require the
pointers to be well-aligned for a given data type. For example (int*)
needs to be int-aligned. If you have a pointer that's not 4B-aligned,
it'll trigger SIGBUS or maybe SIGSEGV. This was true for architectures
like powerpc; I don't think x86/arm64 have this restriction, i.e. it'd
work, even if there might be a minor performance impact. Anyway, we
still enforce/expect correct alignment, because we may still support
some of those alignment-sensitive platforms, and it's "tidy".

The other reason is that we sometimes use alignment to add padding, to
reduce contention when accessing elements in hot arrays. We want to
align to cacheline boundaries, so that a struct does not require
accessing more cachelines than really necessary. And also to reduce
contention - the more cachelines, the higher the risk of contention.

 
Thank you for your explanation. I had a similar understanding. However,
I believed that MAXALIGN and CACHELINEALIGN are primarily performance optimizations
that do not impact the correctness of the code. This assumption is based on the fact
that I have not observed any failures on GitHub CI, even when changing the alignment
in this part of the code.


Now, back to the patch. The code originally did this in ShmemInitStruct

    hashp = ShmemInitStruct(...)

to allocate the hctl, and then

    firstElement = (HASHELEMENT *) ShmemAlloc(nelem * elementSize);

in element_alloc(). But this means the "elements" allocation is aligned
to PG_CACHE_LINE_SIZE, i.e. 128B, because ShmemAllocRaw() does this:

    size = CACHELINEALIGN(size);

So it distributes memory in multiples of 128B, and I believe it starts
at a multiple of 128B.

But the patch reworks this to allocate everything at once, and thus it
won't get this alignment automatically. AFAIK that's not intentional,
because no one explicitly mentioned this. And it may not be quite
desirable, judging by the comment in ShmemAllocRaw().


Yes, the patch reworks this to allocate all the shared memory at once.
It uses ShmemInitStruct, which internally calls ShmemAllocRaw. So the whole chunk
of memory allocated is still CACHELINEALIGNed.

I mentioned v5 adds alignment, but I think it does not do that quite
correctly. It adds alignment by changing the macros from:

+#define HASH_ELEMENTS_OFFSET(hctl, nsegs) \
+       (sizeof(HASHHDR) + \
+        ((hctl)->dsize * sizeof(HASHSEGMENT)) + \
+        ((hctl)->ssize * (nsegs) * sizeof(HASHBUCKET)))

to

+#define HASH_ELEMENTS_OFFSET(hctl, nsegs) \
+       (MAXALIGN(sizeof(HASHHDR)) + \
+        ((hctl)->dsize * MAXALIGN(sizeof(HASHSEGMENT))) + \
+        ((hctl)->ssize * (nsegs) * MAXALIGN(sizeof(HASHBUCKET))))

First, it uses MAXALIGN, but that's mostly my fault, because my comment
suggested that - the comment in ShmemAllocRaw(), however, makes the case
for using CACHELINEALIGN.

Good catch. For a shared hash table, allocations need to be
CACHELINEALIGNed.
I think hash_get_init_size does not need to call CACHELINEALIGN
explicitly, as ShmemInitStruct already does this.
In that case, the size returned by hash_get_init_size just needs to
MAXALIGN the required structs as per hash_create() requirements, and CACHELINEALIGN
will be taken care of by ShmemInitStruct at the time of allocating the entire chunk.


But more importantly, it adds alignment to all hctl field, and to every
element of those arrays. But that's not what the alignment was supposed
to do - it was supposed to align arrays, not individual elements. Not
only would this waste memory, it would actually break direct access to
those array elements.

I think the existing code has occurrences of both, i.e. aligning individual
elements and aligning arrays.
A similar precedent exists in the function hash_estimate_size(), which applies
MAXALIGN not only to individual structs like HASHHDR, HASHELEMENT, and
entrysize, but also to the array of HASHBUCKET headers.

I agree with you that perhaps we don't need MAXALIGN for all of these structures.
For example, HASHBUCKET is a pointer to a linked list of elements; it might not
require alignment if the elements it points to are already aligned.


But there's another detail - even before this patch, most of the stuff
was allocated at once by ShmemInitStruct(). Everything except for the
elements, so to replicate the alignment we only need to worry about that
last part. So I think this should do:
 
+#define HASH_ELEMENTS_OFFSET(hctl, nsegs) \
+    CACHELINEALIGN(sizeof(HASHHDR) + \
+     ((hctl)->dsize * sizeof(HASHSEGMENT)) + \
+     ((hctl)->ssize * (nsegs) * sizeof(HASHBUCKET)))

This is what the 0003 patch does. There's still one minor difference, in
that we used to align each segment independently - each element_alloc()
call allocated a new CACHELINEALIGN-ed chunk, while now we have just a
single chunk. But I think that's OK.

 
Before this patch, the following structures were allocated separately using
ShmemAllocRaw: the directory, each segment (seg_alloc), and a chunk of
elements (element_alloc). Hence, I don't understand why v-0003* CACHELINEALIGNs
in the manner it does.

I think if we want to emulate the current behaviour we should do something like:

    CACHELINEALIGN(sizeof(HASHHDR) + dsize * sizeof(HASHSEGMENT))
        + CACHELINEALIGN(sizeof(HASHBUCKET) * ssize) * nsegs
        + CACHELINEALIGN(init_size * elementSize);

Like you mentioned, the only difference would be that we would be aligning all
the elements at once instead of aligning individual partitions of elements.


3) I find the comment before hash_get_init_size a bit unclear/confusing.
It says this:

 * init_size should match the total number of elements allocated during
 * hash table creation, it could be zero for non-shared hash tables
 * depending on the value of nelem_alloc. For more explanation see
 * comments within this function.
 *
 * nelem_alloc parameter is not relevant for shared hash tables.

What does "should match" mean here? Doesn't it *determine* the number of
elements allocated? What if it doesn't match?
 
By "should match" I mean that init_size here *is* equal to nelem in hash_create().

AFAICS it means the hash table is sized to expect init_size elements,
but only nelem_alloc elements are actually pre-allocated, right?
 
No. All init_size elements are pre-allocated for a shared hash table, irrespective
of the nelem_alloc value.
For non-shared hash tables, init_size elements are pre-allocated only if init_size
is less than nelem_alloc; otherwise they are allocated as part of expansion.
 
But the
comment says it's init_size which determines the number of elements
allocated during creation. Confusing.

It says "it could be zero ... depending on the value of nelem_alloc".
Depending how? What's the relationship?

 
The relationship is defined in this comment:
 /*
  * For a shared hash table, preallocate the requested number of elements.
  * This reduces problems with run-time out-of-shared-memory conditions.
  *
  * For a non-shared hash table, preallocate the requested number of
  * elements if it's less than our chosen nelem_alloc.  This avoids wasting
  * space if the caller correctly estimates a small table size.
  */

The hash_create code is confusing because a variable named nelem_alloc is used
in two different cases. In the above case, nelem_alloc refers to the value
returned by the choose_nelem_alloc function.

The other nelem_alloc determines the number of elements in each partition
of a partitioned hash table. This is not what is being referred to in the
above comment.

The bit "For more explanation see comments within this function" is not
great, if only because there are not many comments within the function,
so there's no "more explanation". But if there's something important, it
should be in the main comment, preferably.

 
I will improve the comment in the next version.

Thank you,
Rahila Syed
