On Thu, Mar 05, 2026 at 04:25:57PM -0600, Sami Imseih wrote:
> With the above assert in place, the usable_pages of 879 as you have
> in test_dsa.c crashes due to the assertion failure.
Yep, the assertion looks useful to have in place.
> As far as the fix, I do agree with what you have in 0001-, except I am
> not so sure about the "+1 for rounding".
> Can we just do ceiling division?
Ceiling division is more precise at the end, and easier to verify, so
I'd tend to stick with your approach.
> + /*
> +  * We must also account for pagemap entries needed to cover the
> +  * metadata pages themselves.  The pagemap must track all pages in
> +  * the segment, including the pages occupied by metadata.
> +  */
> + metadata_bytes +=
> +     ((metadata_bytes + (FPM_PAGE_SIZE - sizeof(dsa_pointer)) - 1) /
> +      (FPM_PAGE_SIZE - sizeof(dsa_pointer))) *
> +     sizeof(dsa_pointer);
>
> I don't think we should add a test_dsa, but I do think it was useful
> to verify the issue.
I think that a test would actually be nice to have in test_dsa, but the
proposed one lacks flexibility.  How about changing it to a function
that takes as input the values you'd want to test for pagemap_start,
usable_pages and the offset?  The point is to check a dsa_allocate()
and the sanity of an offset, hence test_dsa_allocate() as a simple
function name?  The test module exists down to v17, which should be
enough to check things across platforms moving ahead.
> This sounds like it should be backpatched, but we'll see what a
> committer thinks.
This should be backpatched.  Had you told me that the allocation was
oversized, then an optimization may have been possible on HEAD,
especially if the overestimation was large.  A small reduction in size
may not even be worth worrying about in some cases, as we tend to
oversize some shmem areas as well.  An undersized calculation is a
very different story: memory corruptions on the stack are not cool at
all, especially on stable branches, and they lead to unpredictable
behaviors.
--
Michael