Thread: Re: Solaris ISM Testing

Re: Solaris ISM Testing

From: Bruce Momjian
The attached email shows that Solaris benefits from the ISM or Intimate
Shared Memory setting during shmat() shared memory attachment.  It causes
processes mapping the same shared memory to share mapping pages _and_
locks the pages in RAM.

I know many OSes lock shared memory in RAM anyway, or have OS parameters
that control this (FreeBSD), but it seems Solaris does this on a
per-shmat() basis.  Should we add this flag to shmat() calls for Solaris?
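
A rough sketch of what I have in mind (illustrative only; pg_attach_shmem
is a made-up wrapper name, SHM_SHARE_MMU is the Solaris flag in
<sys/shm.h> that requests ISM, and the real change would go wherever we
currently call shmat()):

    #include <sys/ipc.h>
    #include <sys/shm.h>

    void *
    pg_attach_shmem(int shmId)
    {
    #ifdef SOLARIS
        /* Request Intimate Shared Memory: shared page tables, pages locked in RAM */
        return shmat(shmId, NULL, SHM_SHARE_MMU);
    #else
        return shmat(shmId, NULL, 0);
    #endif
    }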

Josh, can you supply a patch for our review?

---------------------------------------------------------------------------

P.J. "Josh" Rovero wrote:
> Bruce,
> 
> Here are our results to date.  Kind of mixed.  We tested on both
> single processor (ultra 5 and ultra 10) and dual processor (ultra 60).
> Results here are from the Ultra60.  On single processor boxes, there
> was a clear change in the disk and cpu activity when ISM was used.
> Without ISM there appeared to be lower cpu load and lots of disk I/O.
> With ISM, disk activity decreased dramatically, but cpu pegged periodically.
> 
> All we did was change the flags on the shmat calls.  I don't know if
> that's all that is really needed.  At any rate, ISM doesn't *hurt*
> performance on Solaris, and can sometimes significantly improve it.
> 
> I realize that these cases don't cover all aspects of postgres performance.
> Our local tests (LTC) are using real world data, and really aren't
> suitable for other folks to run.  If there's a more comprehensive
> benchmark suite for postgres around, we'd be happy to run it.
> 
> pg_bench -- 247% increase in tps with ISM ***before tuning pgsql***
>              (from 34 to 84 tps)
> pg_bench --   3% increase in tps with ISM ***after tuning pgsql***
>                (from 80.5 to 83 tps)
> LTC 1    -- 9% increase with ISM  (local test, 4500 complex inserts)
>                 *** before tuning ***
>           -- 1% increase with ISM  ***after tuning***
> 
> LTC 2    -- 5% increase with ISM  (local test, 415000 inserts to one table,
>                                     415000 updates to another, before tuning)
>           -- 3% increase with ISM  *** after tuning ***
> 
> ISM seems to make the largest difference when an arbitrary, non-optimum,
> pgsql configuration is used.  There was not a huge difference in the
> before and after tuning configurations.
> 
> Ultra 60 with 512 MB RAM, 2x450 MHz UltraSparc II.
> 
> Before: max_connections = 24
>     shared_buffers = 3072
>     sort_mem = 8192
>     vacuum_mem = 16384
>     shmmax = 2640000
> 
> After:  max_connections = 32
>     shared_buffers = 3294
>     sort_mem = 8192
>     vacuum_mem = 16384
>     shmmax = 28278784
> 
> 
> -- 
> P. J. "Josh" Rovero                                 Sonalysts, Inc.
> Email: rovero@sonalysts.com    www.sonalysts.com    215 Parkway North
> Work: (860)326-3671 or 442-4355                     Waterford CT 06385
> ***********************************************************************
> 
> 

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Solaris ISM Testing

From: Tom Lane
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> The attached email shows that Solaris benefits from the ISM or Intimate
> Shared Memory setting during shmat() shared memory attachment.  It causes
> processes mapping the same shared memory to share mapping pages _and_
> locks the pages in RAM.

Huh?  I understand "locks the pages in RAM" but I don't understand the
first part of that.  ISTM shared memory is shared memory; if we didn't
share it without this flag, we'd not be working at all on Solaris.

> I know many OSes lock shared memory in RAM anyway, or have OS parameters
> that control this (FreeBSD), but it seems Solaris does this on a
> per-shmat() basis.  Should we add this flag to shmat() calls for Solaris?

Certainly on any OS where we can request pinning our shmem in RAM, we
should do so --- I've pointed out before that allowing our disk buffers
to be swapped out can't be anything but counterproductive.  Not sure
that this should be thought of as an "#ifdef SOLARIS" kind of change;
do any other Unixen share this aspect of the API?
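
If it does turn out to be Solaris-only in practice, we could still avoid
a platform #ifdef by keying on the flag itself, roughly like this (a
sketch only; PG_SHMAT_FLAGS and pg_attach_shmem are made-up names, and
SHM_SHARE_MMU is Solaris's ISM flag):

    #include <sys/ipc.h>
    #include <sys/shm.h>

    #ifdef SHM_SHARE_MMU
    #define PG_SHMAT_FLAGS  SHM_SHARE_MMU   /* request ISM where available */
    #else
    #define PG_SHMAT_FLAGS  0
    #endif

    void *
    pg_attach_shmem(int shmId)
    {
        return shmat(shmId, NULL, PG_SHMAT_FLAGS);
    }

Any platform whose <sys/shm.h> defines SHM_SHARE_MMU would then pick up
the behavior automatically.
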
        regards, tom lane


Re: Solaris ISM Testing

From: Bruce Momjian
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > The attached email shows that Solaris benefits from the ISM or Intimate
> > Shared Memory setting during shmat() shared memory attachment.  It causes
> > processes mapping the same shared memory to share mapping pages _and_
> > locks the pages in RAM.
> 
> Huh?  I understand "locks the pages in RAM" but I don't understand the
> first part of that.  ISTM shared memory is shared memory; if we didn't
> share it without this flag, we'd not be working at all on Solaris.

It shares the virtual page map tables as well as the actual RAM pages.

> > I know many OSes lock shared memory in RAM anyway, or have OS parameters
> > that control this (FreeBSD), but it seems Solaris does this on a
> > per-shmat() basis.  Should we add this flag to shmat() calls for Solaris?
> 
> Certainly on any OS where we can request pinning our shmem in RAM, we
> should do so --- I've pointed out before that allowing our disk buffers
> to be swapped out can't be anything but counterproductive.  Not sure
> that this should be thought of as an "#ifdef SOLARIS" kind of change;
> do any other Unixen share this aspect of the API?

Yes, #ifdef SOLARIS.  I am waiting for a patch from the reporter.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026


Re: Solaris ISM Testing

From: Bruce Momjian
Added to TODO:

> * Add Intimate Shared Memory (ISM) for Solaris
> * Add documentation to lock shared memory into RAM for each OS, if possible

I have re-requested the Solaris patch for ISM.

---------------------------------------------------------------------------

Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > The attached email shows that Solaris benefits from the ISM or Intimate
> > Shared Memory setting during shmat() shared memory attachment.  It causes
> > processes mapping the same shared memory to share mapping pages _and_
> > locks the pages in RAM.
> 
> Huh?  I understand "locks the pages in RAM" but I don't understand the
> first part of that.  ISTM shared memory is shared memory; if we didn't
> share it without this flag, we'd not be working at all on Solaris.
> 
> > I know many OSes lock shared memory in RAM anyway, or have OS parameters
> > that control this (FreeBSD), but it seems Solaris does this on a
> > per-shmat() basis.  Should we add this flag to shmat() calls for Solaris?
> 
> Certainly on any OS where we can request pinning our shmem in RAM, we
> should do so --- I've pointed out before that allowing our disk buffers
> to be swapped out can't be anything but counterproductive.  Not sure
> that this should be thought of as an "#ifdef SOLARIS" kind of change;
> do any other Unixen share this aspect of the API?
> 
>             regards, tom lane
> 

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026