Thread: hugepage configuration for V.9.4.0

hugepage configuration for V.9.4.0

From
John Scalia
Date:
I'm certain that I'm no expert for this one, as I've never had to configure this parameter for anything prior, but I continue to get a startup error when I try to use this. The server is a VM running CentOS 6.5 with 4 GB allocated to it. When I set "huge_pages = on", the server reported:

%FATAL: could not map anonymous shared memory: Cannot allocate memory
%HINT: this error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 1124876288 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.

Further research showed that the server's /sys/kernel/mm/transparent_hugepage/enabled file contained "[always] madvise never"

As I was concerned about the "always" setting, I used "cat madvise > " on the file so it reported "always [madvise] never". I even set this in /etc/rc.local and performed a reboot.

Regardless of which setting, however, I receive the same failure message. Per its suggestions, my settings are shared_buffers = 1024MB and max_connections = 100.

Should I reduce these values? Is a 4 GB test server too small to use huge_pages? The server does run just fine with "huge_pages = try" or "off". What else should I be checking?
--
Jay




Re: hugepage configuration for V.9.4.0

From
Scott Whitney
Date:
Usually I tweak the kernel shmmax, shmmni and shmall values in /proc/sys.

4GB _might_ be too small, in fact, but tweaking those parameters ought to get you started at least.

http://www.postgresql.org/docs/9.0/static/kernel-resources.html
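The arithmetic behind that suggestion can be sketched as follows; this is illustrative sizing only, not a tuning recommendation, using the request size from the FATAL hint above:

```shell
# Compute the kernel.shmmax (bytes) and kernel.shmall (pages) values
# that would cover the 1124876288-byte request from the FATAL hint.
REQUEST=1124876288
PAGE_SIZE=4096                 # getconf PAGE_SIZE on most x86 systems
SHMALL=$(( (REQUEST + PAGE_SIZE - 1) / PAGE_SIZE ))
echo "kernel.shmmax = $REQUEST"
echo "kernel.shmall = $SHMALL"
# apply as root with: sysctl -w kernel.shmmax=... kernel.shmall=...
```

One caveat: since 9.3 the main shared memory segment is allocated with anonymous mmap rather than System V shm, so on 9.4 these SysV limits may not be what the hugepage request is actually hitting.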

----- On Jan 29, 2015, at 11:54 AM, John Scalia <jayknowsunix@gmail.com> wrote:
I'm certain that I'm no expert for this one, as I've never had to configure this parameter for anything prior, but I continue to get a startup error when I try to use this. The
server is a VM running CentOS 6.5 with 4 Gb allocated to it. When I started setting "huge_pages = on", the server reported:

%FATAL: could not map anonymous shared memory: Cannot allocate memory
%HINT: this error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently
1124876288 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.

Further research showed that server's /sys/kernel/mm/transparent_hugepage/enabled file contained "[always] madvise never"

As I was concerned about the "always" setting, I used "cat madvise > " to the file so it reported "always [madvise] never" I even set this in /etc/rc.local and performed a reboot.

Regardless of which setting, however, I receive the same failure message. Per its suggestions, my settings are shared_buffers = 1024MB and max_connections = 100.

Should I reduce these values? Is a 4 Gb test server too small to use huge_pages? The server does run just fine with "huge_pages = try" or "off". What else should I be checking?
--
Jay




--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin



Journyx, Inc.
7600 Burnet Road #300
Austin, TX 78757
www.journyx.com

p 512.834.8888 
f 512-834-8858 



Re: hugepage configuration for V.9.4.0

From
Michael Heaney
Date:
On 1/29/2015 12:54 PM, John Scalia wrote:
> I'm certain that I'm no expert for this one, as I've never had to configure this parameter for anything prior, but I continue to get a startup error when I try to use this. The
> server is a VM running CentOS 6.5 with 4 Gb allocated to it. When I started setting "huge_pages = on", the server reported:
>
> %FATAL: could not map anonymous shared memory: Cannot allocate memory
> %HINT: this error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently
> 1124876288 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
>
> ...

More good information here:

http://www.postgresql.org/docs/9.4/static/kernel-resources.html#LINUX-HUGE-PAGES

I don't think huge pages are going to make much of a difference on a 4GB
server, though.

--
Michael Heaney
JCVI




Re: hugepage configuration for V.9.4.0

From
Tom Lane
Date:
John Scalia <jayknowsunix@gmail.com> writes:
> I'm certain that I'm no expert for this one, as I've never had to configure this parameter for anything prior, but I continue to get a startup error when I try to use this. The
> server is a VM running CentOS 6.5 with 4 Gb allocated to it. When I started setting "huge_pages = on", the server reported:

> %FATAL: could not map anonymous shared memory: Cannot allocate memory
> %HINT: this error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap
> space, or huge pages. To reduce the request size (currently
> 1124876288 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.

> Further research showed that server's /sys/kernel/mm/transparent_hugepage/enabled file contained "[always] madvise never"

> As I was concerned about the "always" setting, I used "cat madvise > " on the file so it reported "always [madvise] never". I even set this in /etc/rc.local and performed a reboot.

FWIW, I think that the transparent_hugepage setting is irrelevant to this.
The whole point here is that we're explicitly asking for a hugepage memory
segment, so the OS doesn't have to try to make it transparent.

What might be happening is that when requesting a hugepage segment, we
actually round up the request (here claimed to be 1124876288 bytes)
to the next 2MB boundary.  Is it possible your kernel settings are
such that the slightly larger request fails?
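The round-up described here can be checked with a little shell arithmetic, assuming the usual 2 MB hugepage size on x86 Linux:

```shell
# Round the 1124876288-byte request up to the next 2 MB hugepage
# boundary and count how many huge pages the kernel must have free.
REQUEST=1124876288
HUGEPAGE=$(( 2 * 1024 * 1024 ))   # 2 MB; see Hugepagesize in /proc/meminfo
ROUNDED=$(( (REQUEST + HUGEPAGE - 1) / HUGEPAGE * HUGEPAGE ))
PAGES=$(( ROUNDED / HUGEPAGE ))
echo "$ROUNDED bytes = $PAGES huge pages"
```

So the kernel needs at least 537 free 2 MB huge pages to satisfy the slightly larger mapped request.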

Also, there may well be request limits that apply specifically to hugepage
segments; I don't know too much about that.

            regards, tom lane


Re: hugepage configuration for V.9.4.0

From
John Scalia
Date:
On 1/29/2015 1:18 PM, Tom Lane wrote:
> John Scalia <jayknowsunix@gmail.com> writes:
>> I'm certain that I'm no expert for this one, as I've never had to configure this parameter for anything prior, but I continue to get a startup error when I try to use this. The
>> server is a VM running CentOS 6.5 with 4 Gb allocated to it. When I started setting "huge_pages = on", the server reported:
>> %FATAL: could not map anonymous shared memory: Cannot allocate memory
>> %HINT: this error usually means that PostgreSQL's request for a shared memory segment exceeded available memory,
>> swap space, or huge pages. To reduce the request size (currently
>> 1124876288 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
>> Further research showed that server's /sys/kernel/mm/transparent_hugepage/enabled file contained "[always] madvise never"
>> As I was concerned about the "always" setting, I used "cat madvise > " on the file so it reported "always [madvise] never". I even set this in /etc/rc.local and performed a reboot.
> FWIW, I think that the transparent_hugepage setting is irrelevant to this.
> The whole point here is that we're explicitly asking for a hugepage memory
> segment, so the OS doesn't have to try to make it transparent.
>
> What might be happening is that when requesting a hugepage segment, we
> actually round up the request (here claimed to be 1124876288 bytes)
> to the next 2MB boundary.  Is it possible your kernel settings are
> such that the slightly larger request fails?
>
> Also, there may well be request limits that apply specifically to hugepage
> segments; I don't know too much about that.
>
>             regards, tom lane
>
Thanks to all that responded. I really kind of figured that a 4 GB server was too small, but I'm limited to that by our virtual server configuration system. I'll look into how this kernel is configured, Tom, but is there something specific that you know of for me to examine? FWIW, I didn't build this kernel, as I have to choose one from the server build page when I create the system. I guess I could download the sources and do a build once it's running, but I think it's really not worth the effort for one this small.


Re: hugepage configuration for V.9.4.0

From
Tom Lane
Date:
John Scalia <jayknowsunix@gmail.com> writes:
> On 1/29/2015 1:18 PM, Tom Lane wrote:
>> Also, there may well be request limits that apply specifically to hugepage
>> segments; I don't know too much about that.

> Thanks to all that responded. I really kind of figured that a 4Gb server was too small, but I'm limited to that by
> our virtual server configuration system. I'll look into how this
> kernel is configured, Tom, but is there something specific that you know
> of for me to examine?

The manual section that somebody else pointed you to mentions a
vm.nr_hugepages setting ...

            regards, tom lane