
From: Gregory Stark
Subject: Re: 8.3beta1 testing on Solaris
Date:
Msg-id: 87640u95xu.fsf@oxford.xeocode.com
In response to: 8.3beta1 testing on Solaris  ("Jignesh K. Shah" <J.K.Shah@Sun.COM>)
Responses: Re: [PERFORM] 8.3beta1 testing on Solaris
List: pgsql-hackers
"Jignesh K. Shah" <J.K.Shah@Sun.COM> writes:

> CLOG data is not cached in any PostgreSQL shared memory segments and hence
> becomes the bottleneck as it has to constantly go to the filesystem to get
> the read data.

This is the same bottleneck you discussed earlier. CLOG reads are cached in
the Postgres shared memory segment, but only in NUM_CLOG_BUFFERS of them,
which defaults to 8 buffers of 8kB each. With 1,000 clients at the
transaction rate you're running, you would need a larger number of buffers.
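
For scale: CLOG stores two status bits per transaction, i.e. four
transactions per byte, so the default pool covers only about a quarter
million transactions, which a 1,000-client run can chew through quickly.
A back-of-the-envelope sketch (just arithmetic, not Postgres code;
assuming the standard 8kB BLCKSZ and the default of 8 buffers):

#include <stdio.h>

int main(void)
{
    const long page_size = 8192;     /* BLCKSZ */
    const long xacts_per_byte = 4;   /* CLOG keeps 2 status bits per xact */
    const long num_clog_buffers = 8; /* the 8.3 default */

    long per_page = page_size * xacts_per_byte;
    printf("one CLOG page covers %ld xacts\n", per_page);
    printf("%ld buffers cover %ld xacts\n",
           num_clog_buffers, num_clog_buffers * per_page);
    return 0;
}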

Using the filesystem buffer cache is also an entirely reasonable solution,
though; that's surely part of the logic behind not trying to keep more of the
clog in shared memory. Do you have any measurements of how much time is being
spent just doing the logical I/O to the buffer cache for the clog pages?
4MB/s seems far from insignificant, but your machine is big enough that
perhaps I'm thinking at the wrong scale.
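
If it helps to get a ballpark for the per-read cost, here is a rough
standalone microbenchmark (not a measurement of Postgres itself) that
times repeated 8kB pread()s of an already-cached file; the path is a
placeholder, point it at any file of at least 8kB that's warm in the
filesystem cache:

#include <fcntl.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/tmp/clogtest"; /* placeholder */
    char        buf[8192];
    struct timeval t0, t1;
    int         i, reads = 100000;
    int         fd = open(path, O_RDONLY);

    if (fd < 0) { perror("open"); return 1; }

    gettimeofday(&t0, NULL);
    for (i = 0; i < reads; i++)
        if (pread(fd, buf, sizeof buf, 0) != (ssize_t) sizeof buf)
        { perror("pread"); return 1; }
    gettimeofday(&t1, NULL);

    /* total elapsed microseconds across all cached reads */
    double usec = (t1.tv_sec - t0.tv_sec) * 1e6
                + (t1.tv_usec - t0.tv_usec);
    printf("%d cached 8kB reads: %.2f usec/read\n", reads, usec / reads);
    close(fd);
    return 0;
}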

I'm really curious whether you see any benefit from the vxid read-only
transactions, though I'm not sure how to get an apples-to-apples comparison.
Ideally you would compare against CVS HEAD from immediately before the vxid
patch went in. Failing that, calling some function that forces an xid to be
allocated and seeing how much that slows down the benchmark would be a good
substitute.
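
One way to do that from a driver, sketched with libpq (the connection
string and query are placeholders, and I'm assuming the txid_current()
function that went into 8.3, which assigns a real xid even in an
otherwise read-only transaction):

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=bench");  /* placeholder conninfo */
    int     i;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    for (i = 0; i < 10000; i++)
    {
        /* txid_current() forces allocation of a real xid, defeating the
         * vxid optimisation for this otherwise read-only query */
        PGresult *res = PQexec(conn,
                               "SELECT txid_current(), count(*) FROM pg_class");

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}

Running the benchmark with and without the extra call should approximate
the xid-allocation cost that the vxid change avoids.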

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
