Re: [WIP PATCH] for Performance Improvement in Buffer Management - Mailing list pgsql-hackers

From: Amit Kapila
Subject: Re: [WIP PATCH] for Performance Improvement in Buffer Management
Msg-id: 6C0B27F7206C9E4CA54AE035729E9C382853EFDA@szxeml509-mbx
In response to: Re: [WIP PATCH] for Performance Improvement in Buffer Management (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-hackers
On Saturday, October 20, 2012 11:03 PM Jeff Janes wrote:
On Fri, Sep 7, 2012 at 6:14 AM, Amit Kapila <amit.kapila@huawei.com> wrote:
> On Thursday, September 06, 2012 2:38 PM Amit Kapila wrote:
> On Tuesday, September 04, 2012 6:55 PM Amit Kapila wrote:
> On Tuesday, September 04, 2012 12:42 AM Jeff Janes wrote:
> On Mon, Sep 3, 2012 at 7:15 AM, Amit Kapila <amit.kapila@huawei.com> wrote:
>>>> This patch is based on the following TODO item:
>>
>>>> Consider adding buffers the background writer finds reusable to the free
>>>> list
>
>>> The results for the updated code are attached to this mail.
>>> The scenario is the same as in the original mail:
>>>    1. Load all files of all tables and indexes into OS buffers (using pg_prewarm with the 'read' operation).
>>>    2. Try to load all shared buffers with "pgbench_accounts" table and "pgbench_accounts_pkey" pages (using pg_prewarm with the 'buffers' operation).
>>>    3. Run pgbench with select-only transactions for 20 minutes.
>
>>> Platform details:
>>>    Operating System: Suse-Linux 10.2 x86_64
>>>    Hardware : 4 core (Intel(R) Xeon(R) CPU L5408 @ 2.13GHz)
>>>    RAM : 24GB
>
>>> Server Configuration:
>>>    shared_buffers = 5GB     (1/4th of RAM size)
>>>    Total data size = 16GB
>>> Pgbench configuration:
>>>        transaction type: SELECT only
>>>        scaling factor: 1200
>>>        query mode: simple
>>>        number of clients: <varying from 8 to 64 >
>>>        number of threads: <varying from 8 to 64 >
>>>        duration: 1200 s
>
>>> I shall take further readings for the following configuration and post the results:
>>> 1. The intention behind taking readings with the configuration below is that, with the defined test case, there will be some cases where I/O can happen, so I wanted to check its impact.
>
>>> Shared_buffers - 7 GB
>>> number of clients: <varying from 8 to 64 >
>>> number of threads: <varying from 8 to 64 >
>>> transaction type: SELECT only
>
>> The data for shared_buffers = 7GB is attached to this mail. I have also attached the scripts used to collect this data.

> Is this result reproducible?  Did you monitor IO (with something like
> vmstat) to make sure there was no IO going on during the runs?

Yes, I have reproduced it twice. However, I shall reproduce it once more and use vmstat as well.
I have not observed it with vmstat, but it is observable in the data.
When I kept shared_buffers = 5GB the tps was higher, and when I increased it to 7GB the tps dropped, which suggests
some I/O started happening.
When I increased it to 10GB, the tps dropped drastically, which suggests a lot of I/O. Tomorrow I will post the 10GB
shared_buffers data as well.

> Run the modes in reciprocating order?
Sorry, I didn't understand this. What do you mean by running the modes in reciprocating order?

> If you have 7GB of shared_buffers and 16GB of database, that comes out
> to 23GB of data to be held in 24GB of RAM.  In my experience it is
> hard to get that much data cached by simple prewarm. The newer data
> will drive out the older data even if technically there is room.  So
> then when you start running the benchmark, you still have to read in
> some of the data which dramatically slows down the benchmark.

Yes, with 7GB the chances of doing I/O are high, but with 5GB the chances are lower, which is observed in the data as
well (TPS in the 7GB data is less than in the 5GB data).
Please see the results for 5GB shared buffers in the mail below:
http://archives.postgresql.org/pgsql-hackers/2012-09/msg00318.php

In the 7GB case, you can see in the data that without this patch, the tps with the original code is quite low compared
to the 5GB data.
I am sorry, there is a typo in the 7GB shared buffers data: the heading wrongly says 5GB.

> I haven't been able to detect any reliable difference in performance
> with this patch.  I've been testing with 150 scale factor with 4GB of
> RAM and 4 cores, over a variety of shared_buffers and concurrencies.

I think the main reason for this is that when shared buffers are small, there is no performance gain;
I observed the same when I ran this test with shared_buffers = 2GB, where there was no performance gain either.
Please see the results for shared_buffers = 2GB in the mail below:
http://archives.postgresql.org/pgsql-hackers/2012-09/msg00422.php

The reason I can think of is that when shared buffers are small, the clock sweep runs very fast and is not a
bottleneck.
Only when shared buffers increase above some threshold does an allocation spend considerable time in the clock sweep.
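
To illustrate what I mean, below is a much-simplified, standalone sketch of that allocation path. This is not the
actual PostgreSQL code (buffer headers, locks, pins, and the real background writer are all omitted, and the
structures are only loosely modelled on buf_internals.h/freelist.c): a backend takes a buffer from the freelist when
one is available, and only otherwise falls back to the clock sweep, which may have to decrement usage counts across
many buffers before finding a victim.

    #include <stdio.h>

    #define NBUFFERS  1024        /* stand-in for shared_buffers */
    #define MAX_USAGE 5           /* cf. BM_MAX_USAGE_COUNT */

    typedef struct
    {
        int usage_count;
        int free_next;            /* next free buffer, or -1 */
    } BufDesc;

    static BufDesc buffers[NBUFFERS];
    static int first_free = -1;   /* head of the freelist */
    static int next_victim = 0;   /* the clock hand */

    /* background writer side: put a reusable buffer on the freelist */
    static void
    free_buffer(int buf)
    {
        buffers[buf].free_next = first_free;
        first_free = buf;
    }

    /* backend side: find a victim buffer, counting clock-hand steps */
    static int
    get_buffer(long *sweep_steps)
    {
        /* fast path: take a buffer from the freelist if one is there */
        if (first_free >= 0)
        {
            int buf = first_free;

            first_free = buffers[buf].free_next;
            return buf;
        }

        /* slow path: clock sweep, decrementing usage counts as we go */
        for (;;)
        {
            int buf = next_victim;

            next_victim = (next_victim + 1) % NBUFFERS;
            (*sweep_steps)++;
            if (buffers[buf].usage_count == 0)
                return buf;
            buffers[buf].usage_count--;
        }
    }

    int
    main(void)
    {
        long steps = 0;
        int i;

        /* pretend every buffer was recently used */
        for (i = 0; i < NBUFFERS; i++)
            buffers[i].usage_count = MAX_USAGE;

        /* empty freelist: every allocation pays for the sweep */
        for (i = 0; i < 100; i++)
            get_buffer(&steps);
        printf("sweep only   : %ld hand steps for 100 allocations\n", steps);

        /* now let the "background writer" prime the freelist first */
        for (i = 0; i < 100; i++)
            free_buffer(i);
        steps = 0;
        for (i = 0; i < 100; i++)
            get_buffer(&steps);
        printf("with freelist: %ld hand steps for 100 allocations\n", steps);

        return 0;
    }

With all usage counts at their maximum and an empty freelist, the first victim search in this sketch has to walk the
whole buffer array several times (a number of hand steps proportional to NBUFFERS), whereas allocations served from
the primed freelist take no sweep steps at all. That is the effect the patch is trying to capture at larger
shared_buffers settings.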

I shall run once with the same configuration you mentioned, but I think it will not show any performance gain, for the
reason mentioned above.
Would it be feasible for you to run with larger shared buffers and also somewhat larger data and RAM?
Basically, I want to know whether you can mimic the situation exercised by the tests I have posted. In any case, I
shall run the tests once again and post the data.


With Regards,
Amit Kapila.



