Hi,
I am going to try to resolve slow buffer mappings by enabling huge pages (in general it looks like there is some MMU/TLB overhead on a server with 192 GB of RAM and 48 GB of shared_buffers, PG 9.6). Please help me understand some details.
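For context, the huge page setup I am testing looks roughly like this (the values are illustrative, not the exact ones on the server: 48 GB / 2 MB = 24576 pages for shared_buffers alone, plus some headroom for the rest of the shared memory segment):
--# postgresql.conf
--huge_pages = on              # or 'try'
--# sysctl, e.g. in /etc/sysctl.d/hugepages.conf
--vm.nr_hugepages = 25000      # illustrative: 24576 pages for shared_buffers + headroom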
1. Does this mean that dirtying several 8 kB buffers in PostgreSQL shared memory leads to an actual 2 MB write operation? If so, it looks like some kinds of DML could actually degrade in terms of performance.
2. With "huge_pages" set to "off" I was able to check shared memory usage by examining smaps, for example:
--[root@dbtest3 ~]# grep -A 15 deleted /proc/7999/smaps
--7f6a76b5f000-7f6a7fa1b000 rw-s 00000000 00:04 64669 /dev/zero (deleted)
--Size: 146160 kB
--Rss: 11576 kB
--Pss: 8384 kB
--Shared_Clean: 0 kB
--Shared_Dirty: 5952 kB
--Private_Clean: 0 kB
--Private_Dirty: 5624 kB
--Referenced: 11576 kB
--Anonymous: 0 kB
--AnonHugePages: 0 kB
--Swap: 0 kB
--KernelPageSize: 4 kB
--MMUPageSize: 4 kB
--Locked: 0 kB
--VmFlags: rd wr sh mr mw me ms
But when huge pages are turned on, all metrics except Size, KernelPageSize and MMUPageSize are always zero. Why is that, and is there any workaround (see the sketch right after the listing below)?
--[root@dbtest3 ~]# grep -B 15 'sh\ ' /proc/2475/smaps
--2aaaaac00000-2aaab3c00000 rw-s 00000000 00:0c 19963 /anon_hugepage (deleted)
--Size: 147456 kB
--Rss: 0 kB
--Pss: 0 kB
--Shared_Clean: 0 kB
--Shared_Dirty: 0 kB
--Private_Clean: 0 kB
--Private_Dirty: 0 kB
--Referenced: 0 kB
--Anonymous: 0 kB
--AnonHugePages: 0 kB
--Swap: 0 kB
--KernelPageSize: 2048 kB
--MMUPageSize: 2048 kB
--Locked: 0 kB
--VmFlags: rd wr sh mr mw me ms de ht
--..
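As a possible workaround I am currently looking at the system-wide counters instead of per-process smaps, e.g.:
--[root@dbtest3 ~]# grep ^HugePages /proc/meminfo
HugePages_Total minus HugePages_Free gives the number of huge pages actually faulted in, and HugePages_Rsvd shows pages reserved for a mapping but not yet touched. This only gives an overall picture, though, not per-backend usage like Rss/Pss in smaps.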
3. With "huge_pages" set to "try", would Postgres actually try to use a mix of huge pages and 4 kB OS pages when "vm.nr_hugepages" is exceeded? It looks like "vm.nr_hugepages" has to be set considerably higher than what VmPeak suggests to guarantee reliable allocation.
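For reference, the way I currently derive "vm.nr_hugepages" is the VmPeak-based recipe from the documentation (the data directory path and the postmaster PID placeholder below are just examples from the test box):
--[root@dbtest3 ~]# head -1 /var/lib/pgsql/9.6/data/postmaster.pid
--[root@dbtest3 ~]# grep ^VmPeak /proc/<postmaster_pid>/status
--[root@dbtest3 ~]# grep ^Hugepagesize /proc/meminfo
i.e. "vm.nr_hugepages" should be at least VmPeak divided by Hugepagesize, and my question is how much on top of that is needed for "try" to behave reliably.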
Thanks,
Pavel Suderevsky