On Fri, Sep 7, 2012 at 6:14 AM, Amit kapila <amit.kapila@huawei.com> wrote:
> On Thursday, September 06, 2012 2:38 PM Amit kapila wrote:
> On Tuesday, September 04, 2012 6:55 PM Amit kapila wrote:
> On Tuesday, September 04, 2012 12:42 AM Jeff Janes wrote:
> On Mon, Sep 3, 2012 at 7:15 AM, Amit kapila <amit.kapila@huawei.com> wrote:
>>>> This patch is based on the below TODO item:
>>
>>>> Consider adding buffers the background writer finds reusable to the free
>>>> list
>
>> The results for the updated code are attached with this mail.
>> The scenario is the same as in the original mail.
>> 1. Load all the files into OS buffers (using pg_prewarm with the 'read' operation) for all tables and indexes.
>> 2. Try to load all shared buffers with "pgbench_accounts" table and "pgbench_accounts_pkey" pages (using pg_prewarm with the 'buffers' operation).
>> 3. Run pgbench with select-only transactions for 20 minutes.
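For reference, the prewarm steps quoted above could be scripted roughly as follows. This is only a sketch: it assumes a database named "pgbench" and the pg_prewarm version under discussion in this thread, with the 'read' (OS cache) and 'buffers' (shared buffers) modes as named in the mail.

```shell
# Step 1 (sketch): pull every user table and index into the OS cache.
# Assumes database "pgbench"; 16384 is the first non-system OID.
psql -d pgbench -Atc "SELECT c.oid::regclass FROM pg_class c
                      WHERE c.relkind IN ('r','i') AND c.oid >= 16384" |
while read rel; do
  psql -d pgbench -c "SELECT pg_prewarm('$rel', 'read')"
done

# Step 2 (sketch): load the accounts table and its pkey into shared buffers.
psql -d pgbench -c "SELECT pg_prewarm('pgbench_accounts', 'buffers')"
psql -d pgbench -c "SELECT pg_prewarm('pgbench_accounts_pkey', 'buffers')"
```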
>
>> Platform details:
>> Operating System: Suse-Linux 10.2 x86_64
>> Hardware : 4 core (Intel(R) Xeon(R) CPU L5408 @ 2.13GHz)
>> RAM : 24GB
>
>> Server Configuration:
>> shared_buffers = 5GB (1/4 th of RAM size)
>> Total data size = 16GB
>> Pgbench configuration:
>> transaction type: SELECT only
>> scaling factor: 1200
>> query mode: simple
>> number of clients: <varying from 8 to 64 >
>> number of threads: <varying from 8 to 64 >
>> duration: 1200 s
>
>> I shall take further readings for the following configurations and post the same:
>> 1. The intention of taking readings with the below configuration is that, with the defined test case, there will be some cases where I/O can happen. So I wanted to check its impact.
>
>> Shared_buffers - 7 GB
>> number of clients: <varying from 8 to 64 >
>> number of threads: <varying from 8 to 64 >
>> transaction type: SELECT only
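A run matching the quoted configuration would look something like the following; the database name and the exact client/thread steps are assumptions, with one data point taken per concurrency level.

```shell
# Select-only pgbench runs per the quoted configuration:
# -S = SELECT-only, -c clients, -j threads, -T duration (seconds).
# Clients/threads varied from 8 to 64, duration 1200 s.
for c in 8 16 32 64; do
  pgbench -S -c "$c" -j "$c" -T 1200 pgbench
done
```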
>
> The data for shared_buffers = 7GB is attached with this mail. I have also attached the scripts used to take this data.
Is this result reproducible? Did you monitor IO (with something like
vmstat) to make sure there was no IO going on during the runs? Did
you run the modes in reciprocating order?
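One way to do the IO monitoring suggested above is to sample vmstat alongside the run. A rough sketch (the database name and concurrency are assumptions; nonzero "blocks in" during the steady state would indicate reads still hitting disk):

```shell
# Sample vmstat every 5 s for the length of a 1200 s run.
vmstat 5 240 > vmstat.log &
pgbench -S -c 32 -j 32 -T 1200 pgbench
wait

# Sum the bi/bo columns (fields 9 and 10 in default vmstat output,
# skipping the two header lines) to see how much IO occurred.
awk 'NR > 2 {bi += $9; bo += $10}
     END {print "blocks in:", bi, "blocks out:", bo}' vmstat.log
```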
If you have 7GB of shared_buffers and 16GB of database, that comes out
to 23GB of data to be held in 24GB of RAM. In my experience it is
hard to get that much data cached by a simple prewarm. The newer data
will drive out the older data even if technically there is room. So
then when you start running the benchmark, you still have to read in
some of the data, which dramatically slows down the benchmark.
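One way to check how much of the benchmark data actually survived the prewarm is the pg_buffercache contrib module. This is a sketch assuming a database named "pgbench"; note it only reports shared_buffers contents, not the OS cache (checking the latter would need something like fincore):

```shell
# Per-relation count of shared buffers held by the pgbench tables
# and indexes, via the pg_buffercache view (8 kB buffers assumed).
psql -d pgbench -c "
  SELECT c.relname,
         count(*)                        AS buffers,
         pg_size_pretty(count(*) * 8192) AS cached
  FROM pg_buffercache b
  JOIN pg_class c
    ON b.relfilenode = pg_relation_filenode(c.oid)
  WHERE c.relname LIKE 'pgbench%'
  GROUP BY c.relname
  ORDER BY buffers DESC"
```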
I haven't been able to detect any reliable difference in performance
with this patch. I've been testing at scale factor 150 with 4GB of
RAM and 4 cores, over a variety of shared_buffers settings and concurrencies.
Cheers,
Jeff