Thread: About these IPC parameters

About these IPC parameters

From
Peter Eisentraut
Date:
I'm trying to sort out the documentation regarding the SysV IPC settings,
but I better understand them myself first. :)

We use three shared-memory segments: One is for the spin locks and is of
negligible size (144 bytes currently). The other two I don't know, but one
of them seems to be sized about 550kB + -B * BLCKSZ

My kernel has the following interesting-looking shared memory settings:

SHMMAX    -- max size per segment. Apparently must be >= 550kB + -B * BLCKSZ
SHMMNI    -- max number of segments system wide, better be >= 3
SHMSEG    -- max number of segments per process, also better be >= 3
SHMALL    -- max number of pages for shmem system wide. This seems to be fixed at some theoretical amount.

The most promising thing to recommend here is evidently raising SHMMAX.
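
If it helps, the quickest way to see whether SHMMAX is the limiting factor
seems to be to simply ask for a segment of about the size the postmaster
will want.  Rough sketch only (the 550 kB constant is just my estimate from
above, 8192 is the default BLCKSZ, and 64 the default -B); this is my own
test program, not anything from our sources:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int
main(int argc, char **argv)
{
    long    nbuffers = (argc > 1) ? atol(argv[1]) : 64; /* the -B value */
    size_t  size = 550 * 1024 + nbuffers * 8192;        /* rough estimate */
    int     id;

    id = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
    if (id < 0)
    {
        perror("shmget");   /* EINVAL usually means SHMMAX is too small */
        return 1;
    }
    printf("got a segment of %lu bytes, SHMMAX is not the problem\n",
           (unsigned long) size);
    shmctl(id, IPC_RMID, NULL);     /* remove the test segment again */
    return 0;
}

Run it with the intended -B value as the argument.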


For semaphores, we're using ceil(-N / 16) sets of 16 semaphores. In my
kernel I see:

SEMMNI    -- max number of semaphore "identifiers" (=sets?)
SEMMSL    -- max semaphores per set, this is explained in storage/proc.h
SEMMNS    -- max semaphores in system

So, SEMMNI and SEMMNS seem to be the most promising settings to change.
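
As a similar sanity check on the semaphore side, one could try to grab the
same number of 16-semaphore sets that the formula above implies.  Again only
a sketch of my own, not code from the backend:

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/sem.h>

int
main(int argc, char **argv)
{
    int     nbackends = (argc > 1) ? atoi(argv[1]) : 32;   /* the -N value */
    int     nsets = (nbackends + 15) / 16;                  /* ceil(-N / 16) */
    int     ids[1024];
    int     i;

    if (nsets > 1024)
        nsets = 1024;
    for (i = 0; i < nsets; i++)
    {
        ids[i] = semget(IPC_PRIVATE, 16, IPC_CREAT | 0600);
        if (ids[i] < 0)
        {
            perror("semget");   /* ENOSPC points at SEMMNI or SEMMNS */
            break;
        }
    }
    printf("allocated %d of %d sets of 16 semaphores\n", i, nsets);
    while (--i >= 0)
        semctl(ids[i], 0, IPC_RMID);    /* give the test sets back */
    return 0;
}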

Is there any noteworthy relevance of some of the other parameters? I see
FAQ_BSDI talks about SEMUME and SEMMNU.


-- 
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden



Re: About these IPC parameters

From
Tom Lane
Date:
Peter Eisentraut <peter_e@gmx.net> writes:
> We use three shared-memory segments: One is for the spin locks and is of
> negligible size (144 bytes currently). The other two I don't know, but one
> of them seems to be sized about 550kB + -B * BLCKSZ

The shmem sizes depend on both -B and -N, but the dependence on -B is
much stronger.  Obviously there's 8K per -B for the buffer itself,
and there's also some allowance for hashtable entries for the buffer
indexing tables.  The -N number drives the size of the PROC table plus
some hashtables --- but a PROC entry isn't very big.
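(At the default -B 64, for instance, that's already 64 * 8K = 512K of buffer
space before you count any of the hash tables or the PROC array.)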

I believe there's no really fundamental reason why we use three shmem
segments and not just one.  I've toyed with the idea of trying to
combine them, but not done anything about it yet...

> My kernel has the following interesting-looking shared memory settings:

FWIW, HPUX does not have SHMALL --- and since HPUX began life as SysV
I would imagine a lot of other SysV derivatives don't either.  The
relevant parameters here seem to be

SEMA            Enable Sys V Semaphores
SEMAEM          Max Value for Adjust on Exit Semaphores
SEMMAP          Max Number of Semaphore Map Entries
SEMMNI          Number of Semaphore Identifiers
SEMMNS          Max Number of Semaphores
SEMMNU          Number of Semaphore Undo Structures
SEMUME          Semaphore Undo Entries per Process
SEMVMX          Semaphore Maximum Value
SHMEM           Enable Sys V Shared Memory
SHMMAX          Max Shared Mem Segment (bytes)
SHMMNI          Number of Shared Memory Identifiers
SHMSEG          Shared Memory Segments per Process

Other than shooting yourself in the foot by having SEMA or SHMEM be
0 (OFF), it looks like the parameters that could need raising on this
platform would be SEMMAP, SEMMNI, SEMMNS, SHMMAX.

> Is there any noteworthy relevance of some of the other parameters? I see
> FAQ_BSDI talks about SEMUME and SEMMNU.

AFAIK we don't use semaphore undo, so those are red herrings.
        regards, tom lane


Re: About these IPC parameters

From
The Hermit Hacker
Date:
On Thu, 20 Jul 2000, Tom Lane wrote:

> Peter Eisentraut <peter_e@gmx.net> writes:
> > We use three shared-memory segments: One is for the spin locks and is of
> > negligible size (144 bytes currently). The other two I don't know, but one
> > of them seems to be sized about 550kB + -B * BLCKSZ
> 
> The shmem sizes depend on both -B and -N, but the dependence on -B is
> much stronger.  Obviously there's 8K per -B for the buffer itself,
> and there's also some allowance for hashtable entries for the buffer
> indexing tables.  The -N number drives the size of the PROC table plus
> some hashtables --- but a PROC entry isn't very big.
> 
> I believe there's no really fundamental reason why we use three shmem
> segments and not just one.  I've toyed with the idea of trying to
> combine them, but not done anything about it yet...
> 
> > My kernel has the following interesting-looking shared memory settings:
> 
> FWIW, HPUX does not have SHMALL --- and since HPUX began life as SysV
> I would imagine a lot of other SysV derivatives don't either.  The
> relevant parameters here seem to be
> 
> SEMA            Enable Sys V Semaphores
> SEMAEM          Max Value for Adjust on Exit Semaphores
> SEMMAP          Max Number of Semaphore Map Entries
> SEMMNI          Number of Semaphore Identifiers
> SEMMNS          Max Number of Semaphores
> SEMMNU          Number of Semaphore Undo Structures
> SEMUME          Semaphore Undo Entries per Process
> SEMVMX          Semaphore Maximum Value
> SHMEM           Enable Sys V Shared Memory
> SHMMAX          Max Shared Mem Segment (bytes)
> SHMMNI          Number of Shared Memory Identifiers
> SHMSEG          Shared Memory Segments per Process
> 
> Other than shooting yourself in the foot by having SEMA or SHMEM be
> 0 (OFF), it looks like the parameters that could need raising on this
> platform would be SEMMAP, SEMMNI, SEMMNS, SHMMAX.
> 
> > Is there any noteworthy relevance of some of the other parameters? I see
> > FAQ_BSDI talks about SEMUME and SEMMNU.
> 
> AFAIK we don't use semaphore undo, so those are red herrings.

First off, this might be something we need a whole separate FAQ for, since
I think the concepts are pretty much common across the various OSs?

for instance,  under FreeBSD, I have it set right now as:

====
options         SYSVSHM
options         SHMMAXPGS=4096
options         SHMSEG=256

options         SYSVSEM
options         SEMMNI=256
options         SEMMNS=512
options         SEMMNU=256
options         SEMMAP=256

options         SYSVMSG                 #SYSV-style message queues
====

To run three postmasters, one with '-B 256 -N 128', and the other two just
with '-N 16' ... the thing that I just don't get is how the settings in my
kernel apply, and trying to find any info on that is like pulling teeth :(
For instance, I'm allowing for up to 160 clients to connect, max .. does
that make for one semaphore identifier per client, so I need SEMMNI >=
160?  Or ... ?

I grab'd this off a Sun site dealing with Solaris, but it might also be of
aid:

 Name    Default  Max          Max (usage)  Brief Description
 ------  -------  -----------  -----------  --------------------------------------------
 semmap  10       2147483647                Number of entries in semaphore map
 semmni  10       65535                     Number of semaphore sets (identifiers)
 semmns  60       2147483647   65535        Number of semaphores in the system
 semmnu  30       2147483647                Number of "undo" structures in the system
 semmsl  25       2147483647   65535        Max number of semaphores, per semaphore id
 semopm  10       2147483647                Max number of operations, per semaphore call
 semume  10       2147483647                Max number of "undo" entries, per process
 semusz  96       *see below*               Size in bytes of "undo" structure
 semvmx  32767    2147483647   65535        Semaphore maximum value
 semaem  16384    2147483647   32767        Adjust on exit maximum value

 Detailed Descriptions
 ---------------------
 semmap
 Defines the size of the semaphore resource map; each block of available,
 contiguous semaphores requires one entry in this map.  This is the pool from
 which semget(2) acquires semaphore sets.

 When a semaphore set is removed (deleted), if the block of semaphores to be
 freed is adjacent to a block of semaphores already in the resource map, the
 semaphores in the set being removed are added to the existing map entry; no
 new map entry is required.  If the semaphores in the removed set are not
 adjacent to those in an existing map entry, then a new map entry is required
 to track these semaphores; if there are no more map entries available, the
 system has to discard an entry, 'permanently' losing a block of semaphores
 (permanence is relative; a reboot fixes the problem).  If this should occur,
 a WARNING will be generated, the text of which will be something like
 "rmallocmap: rmap overflow, lost ...".  The end result is that a user could
 later get ENOSPC errors from semget(2) even though it doesn't look like all
 the semaphores are allocated.
 
 semmni
 Defines the number of semaphore sets (identifiers), system wide.  Every
 semaphore set in the system has a unique identifier and control structure.
 The system pre-allocates kernel memory for semmni control structures; each
 control structure is 84 bytes.  If no more identifiers are available,
 semget(2) returns ENOSPC.

 Attempting to set semmni to a value greater than 65535 will result in
 generation of a WARNING, and the value will be set to 65535.
 
 semmns
 Defines the number of semaphores in the system; 16 bytes of kernel memory is
 pre-allocated for each semaphore.  If there is not a large enough block of
 contiguous semaphores in the resource map (see semmap) to satisfy the
 request, semget(2) returns ENOSPC.

 Fragmentation of the semaphore map will result in ENOSPC errors, even though
 there may appear to be ample free semaphores.  Despite attempts by the
 system to merge free sets (see semmap), the size of the clusters of free
 semaphores generally decreases over time.  For this reason, semmns
 frequently must be set higher than the actual number of semaphores required.
 
 semmnu
 Defines the number of semaphore undo structures in the system.  semusz (see
 below) bytes of kernel memory are pre-allocated for each undo structure; one
 undo structure is required for every process for which undo information must
 be recorded.  semop() will return ENOSPC if it is requested to record undo
 information and there are no undo structures available.

 semmsl
 Limits the number of semaphores that can be created for a single semaphore
 id.  If semget(2) returns EINVAL, this limit should be increased.  This
 parameter is only used to validate the argument passed to semget(2).
 Logically, it should be less than or equal to semmns (see above).  Setting
 semmsl too high might allow a few identifiers to hog all the semaphores in
 the system.
 
 semopm
 Limits the number of operations that are allowed in a single semop(2) call.
 If semop(2) returns E2BIG, this limit should be increased.  This parameter
 is only used to validate the argument passed to semop(2).

 semume
 Limits the number of undo records that can exist for a process.  If semop(2)
 returns EINVAL, this limit should be increased.  In addition to its use in
 validating arguments to semop(2), this parameter is used to calculate the
 value of semusz (see below).
 
 semusz
 Defines the size of the semaphore undo structure.  Any attempt to modify
 this parameter directly will be ignored; semusz is always calculated based
 upon the value of semume (see above): semusz = 8 * (semume + 2).

 semvmx
 Limits the maximum value of a semaphore.  Due to the interaction with undo
 structures and semaem (see below), this tuneable should not be increased
 beyond its default value of 32767, unless you can guarantee that SEM_UNDO is
 never and will never be used.  It can be safely reduced, but doing so
 provides no savings.
 
 semaem
 Limits the maximum value of an adjust-on-exit undo element.  No system resources are allocated based on this value.



Re: About these IPC parameters

From
Peter Eisentraut
Date:
Tom Lane writes:

> Other than shooting yourself in the foot by having SEMA or SHMEM be
> 0 (OFF), it looks like the parameters that could need raising on this
> platform would be SEMMAP, SEMMNI, SEMMNS, SHMMAX.

Can you give me a couple of lines on how to change them (e.g., edit some
file and reboot) and perhaps a comment whether some of these tend to be
too low in the default configuration?


-- 
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden



Re: About these IPC parameters

From
Peter Eisentraut
Date:
The Hermit Hacker writes:

> First off, this might be something we need a whole separate FAQ for, since
> I think the concepts are pretty much common across the various OSs?

Working on that...

> for instance,  under FreeBSD, I have it set right now as:

Is SysV IPC still off in stock FreeBSD kernels?

> For instance, I'm allowing for up to 160 clients to connect, max .. does
> that make for one semaphore identifier per client, so I need SEMMNI >=
> 160?  Or ... ?

SEMMNI = 10.  One identifier covers a whole set of 16 semaphores, so 160
clients need ceil(160 / 16) = 10 of them, not one per client.

> I grab'd this off a Sun site dealing with Solaris, but it might also be of
> aid:

Yes, that helped me a lot. I wrote a section about all this for the Admin
Guide. It should pop up in the next day or so.


-- 
Peter Eisentraut                  Sernanders väg 10:115
peter_e@gmx.net                   75262 Uppsala
http://yi.org/peter-e/            Sweden



Re: About these IPC parameters

From
The Hermit Hacker
Date:
On Fri, 21 Jul 2000, Peter Eisentraut wrote:

> The Hermit Hacker writes:
> 
> > First off, this might be something we need a whole separate FAQ for, since
> > I think the concepts are pretty much common across the various OSs?
> 
> Working on that...
> 
> > for instance,  under FreeBSD, I have it set right now as:
> 
> Is SysV IPC still off in stock FreeBSD kernels?

Checking the GENERIC config file, it is enabled by default now ...

> > For instance, I'm allowing for up to 160 clients to connect, max .. does
> > that make for one semaphore identifier per client, so I need SEMMNI >=
> > 160?  Or ... ?
> 
> SEMMNI = 10.  One identifier covers a whole set of 16 semaphores, so 160
> clients need ceil(160 / 16) = 10 of them, not one per client.

Ouch ... so I'm running a bit high on values :)




Re: About these IPC parameters

From
"Henry B. Hotz"
Date:
At 3:34 PM -0300 7/21/00, The Hermit Hacker wrote:
>On Fri, 21 Jul 2000, Peter Eisentraut wrote:
> > Is SysV IPC still off in stock FreeBSD kernels?
>
>Checking the GENERIC config file, it is enabled by default now ...

It's on by default in NetBSD also.


Signature failed Preliminary Design Review.
Feasibility of a new signature is currently being evaluated.
h.b.hotz@jpl.nasa.gov, or hbhotz@oxy.edu


Re: About these IPC parameters

From
Tom Lane
Date:
Peter Eisentraut <peter_e@gmx.net> writes:
> Tom Lane writes:
>> Other than shooting yourself in the foot by having SEMA or SHMEM be
>> 0 (OFF), it looks like the parameters that could need raising on this
>> platform would be SEMMAP, SEMMNI, SEMMNS, SHMMAX.

> Can you give me a couple of lines on how to change them (e.g., edit some
> file and reboot) and perhaps a comment whether some of these tend to be
> too low in the default configuration?

On HPUX the usual advice is "use SAM" (System Administration Manager).
It's a pretty decent point-and-drool tool.  You go into Kernel
Configuration / Configurable Parameters and double-click on the items
you don't like in the resulting list.  When you're done, hit Create
A New Kernel.  SAM used to have some memorable deficiencies (I still
recall that when I first used it, if you let it create a user's home
directory it would leave /users world-writable...) but it seems reliable
enough in HPUX 10.

If I've found the right file to look at, the factory defaults are

semmni              64          Number of Semaphore Identifiers       
semmns              128         Max Number of Semaphores           
shmmax              0x4000000   Max Shared Mem Segment (bytes) 
shmmni              200         Number of Shared Memory Identifiers 
shmseg              120         Shared Memory Segments per Process 

so you'd need to raise these to run a big installation (more than,
say, 100 backends) but not for a default-sized setup.
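(The semaphores are what you'd hit first: we grab one semaphore per backend,
in sets of 16, so 128 backends already use 8 identifiers and the whole
default semmns of 128.)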

What I tend to want to raise are not the IPC parameters but

maxdsiz         0x04000000     Max Data Segment Size (bytes)        
maxssiz         0x00800000     Max Stack Segment Size (bytes)    
maxfiles        60             Soft File Limit per Process           
maxfiles_lim    1024           Hard File Limit per Process           
maxuprc         75             Max Number of User Processes (per user)
maxusers        32             Value of MAXUSERS macro
nfile           (16*(NPROC+16+MAXUSERS)/10+32+2*(NPTY+NSTRPTY))         Max Number of Open Files
ninode          ((NPROC+16+MAXUSERS)+32+(2*NPTY)+(10*NUM_CLIENTS))      Max Number of Open Inodes

In particular, the default maxuprc would definitely be a problem for
running a lot of backends, and you'd likely start running into nfile
or ninode limits too.
        regards, tom lane


Re: About these IPC parameters

From
Bruce Momjian
Date:
> So, SEMMNI and SEMMNS seem to be the most promising settings to change.
> 
> Is there any noteworthy relevance of some of the other parameters? I see
> FAQ_BSDI talks about SEMUME and SEMMNU.

I wrote FAQ_BSDI because it was not trivial to figure out how to modify
those parameters.  I figured other OS's either don't need to do it, or
have an easier way of doing it.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
 


Re: About these IPC parameters

From
The Hermit Hacker
Date:
Peter ...
Here is the 'latest and greatest' NOTES that one of the FreeBSD
guys has been working on for shared memory/semaphores ... not sure if it
helps or not, but I believe it was you that was working on "organizing
this"?

=============================
#####################################################################
# SYSV IPC KERNEL PARAMETERS
#
# Maximum number of entries in a semaphore map.
options         SEMMAP=31

# Maximum number of System V semaphores that can be used on the system at
# one time.
options         SEMMNI=11

# Total number of semaphores system wide
options         SEMMNS=61

# Total number of undo structures in system
options         SEMMNU=31

# Maximum number of System V semaphores that can be used by a single process
# at one time.
options         SEMMSL=61

# Maximum number of operations that can be outstanding on a single System V
# semaphore at one time.
options         SEMOPM=101

# Maximum number of undo operations that can be outstanding on a single
# System V semaphore at one time.
options         SEMUME=11

# Maximum number of shared memory pages system wide.
options         SHMALL=1025

# Maximum size, in bytes, of a single System V shared memory region.
options         SHMMAX="(SHMMAXPGS*PAGE_SIZE+1)"
options         SHMMAXPGS=1025

# Minimum size, in bytes, of a single System V shared memory region.
options         SHMMIN=2

# Maximum number of shared memory regions that can be used on the system
# at one time.
options         SHMMNI=33

# Maximum number of System V shared memory regions that can be attached to
# a single process at one time.
options         SHMSEG=9
========================================

On Thu, 27 Jul 2000, Bruce Momjian wrote:

> > So, SEMMNI and SEMMNS seem to be the most promising settings to change.
> > 
> > Is there any noteworthy relevance of some of the other parameters? I see
> > FAQ_BSDI talks about SEMUME and SEMMNU.
> 
> I wrote FAQ_BSDI because it was not trivial to figure out how to modify
> those parameters.  I figured other OS's either don't need to do it, or
> have an easier way of doing it.
> 
> -- 
>   Bruce Momjian                        |  http://candle.pha.pa.us
>   pgman@candle.pha.pa.us               |  (610) 853-3000
>   +  If your life is a hard drive,     |  830 Blythe Avenue
>   +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
> 

Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org 



Re: About these IPC parameters

From
Bruce Momjian
Date:
The IPC killer is that different OS's have different methods for
changing kernel parameters, and some have different kernel parameter
names.

> Peter Eisentraut <peter_e@gmx.net> writes:
> > Tom Lane writes:
> >> Other than shooting yourself in the foot by having SEMA or SHMEM be
> >> 0 (OFF), it looks like the parameters that could need raising on this
> >> platform would be SEMMAP, SEMMNI, SEMMNS, SHMMAX.
> 
> > Can you give me a couple of lines on how to change them (e.g., edit some
> > file and reboot) and perhaps a comment whether some of these tend to be
> > too low in the default configuration?
> 
> On HPUX the usual advice is "use SAM" (System Administration Manager).
> It's a pretty decent point-and-drool tool.  You go into Kernel
> Configuration / Configurable Parameters and double-click on the items
> you don't like in the resulting list.  When you're done, hit Create
> A New Kernel.  SAM used to have some memorable deficiencies (I still
> recall that when I first used it, if you let it create a user's home
> directory it would leave /users world-writable...) but it seems reliable
> enough in HPUX 10.
> 
> If I've found the right file to look at, the factory defaults are
> 
> semmni              64          Number of Semaphore Identifiers       
> semmns              128         Max Number of Semaphores           
> shmmax              0x4000000   Max Shared Mem Segment (bytes) 
> shmmni              200         Number of Shared Memory Identifiers 
> shmseg              120         Shared Memory Segments per Process 
> 
> so you'd need to raise these to run a big installation (more than,
> say, 100 backends) but not for a default-sized setup.
> 
> What I tend to want to raise are not the IPC parameters but
> 
> maxdsiz         0x04000000     Max Data Segment Size (bytes)        
> maxssiz         0x00800000     Max Stack Segment Size (bytes)    
> maxfiles        60             Soft File Limit per Process           
> maxfiles_lim    1024           Hard File Limit per Process           
> maxuprc         75             Max Number of User Processes (per user)
> maxusers        32             Value of MAXUSERS macro
> nfile           (16*(NPROC+16+MAXUSERS)/10+32+2*(NPTY+NSTRPTY))         Max Number of Open Files
> ninode          ((NPROC+16+MAXUSERS)+32+(2*NPTY)+(10*NUM_CLIENTS))      Max Number of Open Inodes
> 
> In particular, the default maxuprc would definitely be a problem for
> running a lot of backends, and you'd likely start running into nfile
> or ninode limits too.
> 
>             regards, tom lane
> 


-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026