On Mon, 9 Jul 2012 12:30:23 +0200
Andres Freund <andres@2ndquadrant.com> wrote:
> On Monday, July 09, 2012 08:11:00 AM Tom Lane wrote:
> > yamt@mwd.biglobe.ne.jp (YAMAMOTO Takashi) writes:
> > >> Also, I was under the impression that recent Linux kernels use
> > >> hugepages automatically if they can, so I wonder exactly what
> > >> Andres was testing on ...
> > >
> > > if you mean the "transparent hugepage" feature, iirc it doesn't
> > > affect MAP_SHARED mappings like this.
> >
> > Oh! That would explain some things. It seems like a pretty nasty
> > restriction though ... do you know why they did that?
> Looking a bit deeper, they explicitly only work on private memory. The
> reason apparently is that it's too hard to update the page table
> entries in multiple processes at once without introducing locking
> problems/scalability issues.
>
> To be sure one can check /proc/$pid_of_pg_process/smaps and look for
> the mapping to /dev/zero or the biggest mapping ;). It's not counted as
> Anonymous memory and it doesn't have transparent hugepages. I was
> confused before because there are quite a few huge pages (400MB here)
> allocated for postgres during a pgbench run, but that's just all the
> local memory...
A warning: on RHEL 6.1 (2.6.32-131.4.1.el6.x86_64 #1 SMP) we have had
horrible problems caused by transparent_hugepages when running postgres on
largish systems (128GB to 512GB memory, 32 cores). The system sometimes
goes to 99% system time and becomes so slow and unresponsive that it
cannot even complete new TCP connections. Turning off
transparent_hugepages fixes it.
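For the record, the workaround amounts to writing "never" to the THP sysfs
knob; an echo from rc.local does the same thing. A minimal sketch, assuming
the knob lives where I think it does (mainline kernels expose
transparent_hugepage/, while the RHEL 6 kernels ship it as
redhat_transparent_hugepage/):

/* thp_off.c -- sketch only: disable transparent hugepages system-wide.
 * Needs root.  Tries both the mainline and the RHEL 6 sysfs paths.
 */
#include <stdio.h>

int main(void)
{
    const char *knobs[] = {
        "/sys/kernel/mm/transparent_hugepage/enabled",
        "/sys/kernel/mm/redhat_transparent_hugepage/enabled",
    };
    int i, done = 0;

    for (i = 0; i < 2; i++)
    {
        FILE *f = fopen(knobs[i], "w");
        if (f == NULL)
            continue;           /* knob not present on this kernel */
        fputs("never\n", f);
        fclose(f);
        done = 1;
    }
    fprintf(stderr, done ? "transparent hugepages disabled\n"
                         : "no THP sysfs knob found\n");
    return done ? 0 : 1;
}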
That said, explicit hugepage support for the buffer cache would be a big
win, especially at high connection counts.
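To illustrate what I mean, here is a minimal sketch of the usual trick with
the SysV shared memory postgres already uses: ask shmget() for SHM_HUGETLB
and fall back to ordinary pages if the kernel has none reserved
(vm.nr_hugepages). The segment size here is made up for the example; this
is not the actual postgres code:

/* hugeshm.c -- illustration only.
 * Try to create a SysV shared memory segment backed by huge pages
 * (requires vm.nr_hugepages > 0) and fall back to regular pages.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000       /* Linux-specific flag */
#endif

int main(void)
{
    size_t size = 256UL * 1024 * 1024;  /* stand-in for shared_buffers */
    int flags = IPC_CREAT | IPC_EXCL | 0600;
    int shmid;

    /* With SHM_HUGETLB the size must be a multiple of the huge page
     * size (2MB on typical x86_64), which 256MB already is. */
    shmid = shmget(IPC_PRIVATE, size, flags | SHM_HUGETLB);
    if (shmid < 0)
    {
        fprintf(stderr, "huge page shmget failed (%s), falling back\n",
                strerror(errno));
        shmid = shmget(IPC_PRIVATE, size, flags);
    }
    if (shmid < 0)
    {
        perror("shmget");
        return 1;
    }

    void *addr = shmat(shmid, NULL, 0);
    if (addr == (void *) -1)
        perror("shmat");
    else
        printf("attached %zu MB segment at %p\n", size >> 20, addr);

    /* Mark for removal so the demo segment doesn't linger. */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}

The nice part of that pattern is that it degrades gracefully on systems
where nobody has reserved huge pages.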
-dg
--
David Gould daveg@sonic.net
If simplicity worked, the world would be overrun with insects.