Re: Separate memory contexts for relcache and catcache - Mailing list pgsql-hackers
From: Ashutosh Bapat
Subject: Re: Separate memory contexts for relcache and catcache
Msg-id: CAExHW5udGKdOfq3bsuVpLYXG7dqBEdpWYfsrM5JT6ZLKobVK9A@mail.gmail.com
In response to: Re: Separate memory contexts for relcache and catcache (Jeff Davis <pgsql@j-davis.com>)
List: pgsql-hackers
On Sat, Nov 2, 2024 at 3:17 AM Jeff Davis <pgsql@j-davis.com> wrote:
>
> On Fri, 2024-11-01 at 15:19 -0400, Andres Freund wrote:
> > I'm a bit worried about the increase in "wasted" memory we might end
> > up with when creating one aset for *everything*. Just splitting out
> > Relcache and CatCache isn't a big deal from that angle, they're
> > always used reasonably much. But creating a bunch of barely used
> > contexts does have the potential for lots of memory being wasted at
> > the end of a page and on freelists. It might be ok as far as what
> > you proposed in the above email, I haven't analyzed that in depth
> > yet.
>
> Melih raised similar concerns. The new contexts that my patch created
> were CatCacheContext, RelCacheContext, SPICacheContext,
> PgOutputContext, PlanCacheContext, TextSearchCacheContext, and
> TypCacheContext.
>
> Those are all created lazily, so you need to at least be using the
> relevant feature before it has any cost (with the exception of the
> first two).
>
> > > I agree with others that we should look at changing the initial
> > > size or type of the contexts, but that should be a separate
> > > commit.
> >
> > It needs to be done close together though, otherwise we'll increase
> > the new-connection-memory-usage of postgres measurably.
>
> I don't have a strong opinion here; that was a passing comment. But
> I'm curious: why would it increase the per-connection memory usage
> much to just have a couple of new memory contexts?
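The "wasted memory" concern above can be illustrated with a toy model (a
Python sketch, not PostgreSQL's actual aset.c; the 8 kB block size and
power-of-two rounding are illustrative assumptions): a context that
services only a handful of small allocations still holds on to its whole
first block.

```python
# Toy model of an allocation-set ("aset") style context, to illustrate
# how a barely used context wastes memory. The numbers and rounding rule
# are illustrative assumptions, not PostgreSQL's exact behavior.

def round_up_pow2(n):
    """Round a small request up to the next power of two."""
    p = 1
    while p < n:
        p <<= 1
    return p

class ToyAllocSet:
    def __init__(self, init_block_size=8192):
        self.block_size = init_block_size  # the context's first malloc'd block
        self.requested = 0                 # bytes callers actually asked for

    def alloc(self, nbytes):
        self.requested += nbytes
        return round_up_pow2(nbytes)       # chunk size actually consumed

    @property
    def waste(self):
        # Everything in the block not covering caller data: chunk-rounding
        # loss plus the unused tail of the block.
        return self.block_size - self.requested

# A "barely used" context: one 8 kB block, three small allocations.
ctx = ToyAllocSet()
for n in (24, 40, 100):
    ctx.alloc(n)
print(ctx.waste)  # nearly the whole 8 kB block is overhead
```

Multiply that tail-of-block overhead by every lazily created context that
a session touches only once, and the per-backend cost becomes visible.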
Without patch, first backend:

SELECT count(*), pg_size_pretty(sum(total_bytes)) as total_bytes,
       sum(total_nblocks) as total_nblocks,
       pg_size_pretty(sum(free_bytes)) free_bytes,
       sum(free_chunks) as free_chunks,
       pg_size_pretty(sum(used_bytes)) used_bytes
  from pg_get_backend_memory_contexts();

 count | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes
-------+-------------+---------------+------------+-------------+------------
   121 | 1917 kB     |           208 | 716 kB     |         128 | 1201 kB
(1 row)

Second backend (same query):

 count | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes
-------+-------------+---------------+------------+-------------+------------
   121 | 1408 kB     |           210 | 384 kB     |         186 | 1024 kB
(1 row)

With both patches from Melih applied, first backend:

 count | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes
-------+-------------+---------------+------------+-------------+------------
   124 | 1670 kB     |           207 | 467 kB     |         128 | 1203 kB
(1 row)

Second backend:

 count | total_bytes | total_nblocks | free_bytes | free_chunks | used_bytes
-------+-------------+---------------+------------+-------------+------------
   124 | 1417 kB     |           209 | 391 kB     |         187 | 1026 kB
(1 row)

So it looks
like the patches do reduce the memory allocated at backend start, which
is an improvement for the state just after a backend starts. Beyond
that, the chunks allocated in a given context are more likely to have
similar sizes, since they are allocated for the same kinds of objects,
than in one big context serving many different kinds of objects. I
believe this will lead to better utilization of the freelists.

--
Best Wishes,
Ashutosh Bapat
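The freelist argument can be sketched numerically (a hedged toy model:
power-of-two size classes as aset.c uses for small chunks, but the
60/200-byte request sizes are made up, not real catcache or relcache
chunk sizes):

```python
# Hedged toy model of per-size-class freelists. A freed chunk can only
# satisfy a later request of the same power-of-two size class, so a
# context serving one kind of object reuses freed chunks readily, while
# a mixed context leaves more of them stranded on freelists.
from collections import Counter

def size_class(n):
    """Round a request up to its power-of-two size class."""
    p = 1
    while p < n:
        p <<= 1
    return p

def simulate(freed_sizes, request_sizes):
    """Free some chunks, then allocate; return (reused, stranded)."""
    freelist = Counter(size_class(n) for n in freed_sizes)
    reused = 0
    for n in request_sizes:
        c = size_class(n)
        if freelist[c] > 0:
            freelist[c] -= 1
            reused += 1
    return reused, sum(freelist.values())

# Dedicated context: new requests match the sizes that were freed.
print(simulate([60] * 100, [60] * 100))   # (100, 0) -- every chunk reused
# Mixed context: freed small chunks cannot serve larger requests.
print(simulate([60] * 100, [200] * 100))  # (0, 100) -- freelist memory stranded
```

In the mixed case the freed memory is neither returned to the OS nor
usable for the new requests, which is exactly the freelist waste being
discussed.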