Re: optimizing vacuum truncation scans - Mailing list pgsql-hackers

From Haribabu Kommi
Subject Re: optimizing vacuum truncation scans
Date
Msg-id CAJrrPGeC9_bEzEOKi0WHhyCFP+_MSNX=H+SXedVHbBejHXCxnA@mail.gmail.com
In response to Re: optimizing vacuum truncation scans  (Haribabu Kommi <kommi.haribabu@gmail.com>)
List pgsql-hackers
On Mon, Jul 13, 2015 at 5:16 PM, Haribabu Kommi
<kommi.haribabu@gmail.com> wrote:
> On Mon, Jul 13, 2015 at 12:06 PM, Haribabu Kommi
> <kommi.haribabu@gmail.com> wrote:
>> On Thu, Jul 9, 2015 at 5:36 PM, Haribabu Kommi <kommi.haribabu@gmail.com> wrote:
>>>
>>> I will do some performance tests and send you the results.
>>
>> Here are the performance results from tests on my machine.
>>
>>
>>                  Head       vm patch   vm+prefetch patch
>>
>> First vacuum     120 sec    <1 sec     <1 sec
>> Second vacuum    180 sec    180 sec    30 sec
>>
>> I made some modifications to the code so that the first vacuum command
>> skips the truncation. This way I could measure the time taken by the
>> second vacuum.
>>
>> I combined your vm and prefetch patches into a single vm+prefetch
>> patch without a GUC. I kept the prefetch distance at 32 and ran the
>> performance test. I chose 32 to match the current buffer access
>> strategy, which presently uses a ring of 32 buffers for vacuum, rather
>> than making it a user option.
>> I have attached the modified patch with both the vm and prefetch logic.
>>
>> I will do some tests on a machine with an SSD and let you know the
>> results. Based on those results, and on the impact of prefetch on SSD
>> machines, we can decide whether we need a GUC.
>
> Following are the performance readings on a machine with an SSD.
> I increased the pgbench scale factor in this test from 500 to 1000
> to show clearer performance numbers.
>
>                   Head        vm patch    vm+prefetch patch
>
> First vacuum      6.24 sec    2.91 sec    2.91 sec
> Second vacuum     6.66 sec    6.66 sec    7.19 sec
>
> There is a small performance impact on SSD with prefetch.

The above prefetch overhead was observed while prefetching 1639345 pages.
I feel this overhead is small.
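
To illustrate the idea (this is only a sketch, not the exact code in the
attached patch; it assumes the existing backward scan in
count_nondeletable_pages() in vacuumlazy.c and its onerel/vacrelstats
variables):

/*
 * Sketch: prefetch in 32-block batches while scanning backward from the
 * end of the relation.  32 matches the size of vacuum's buffer access
 * strategy ring, as mentioned above.
 */
#define PREFETCH_SIZE   ((BlockNumber) 32)

BlockNumber blkno = vacrelstats->rel_pages;
BlockNumber prefetchedUntil = vacrelstats->rel_pages;

while (blkno > vacrelstats->nonempty_pages)
{
    Buffer  buf;

    blkno--;

    /* If this block is below the prefetched range, prefetch the next batch. */
    if (prefetchedUntil > blkno)
    {
        BlockNumber prefetchStart = blkno & ~(PREFETCH_SIZE - 1);
        BlockNumber pblkno;

        for (pblkno = prefetchStart; pblkno <= blkno; pblkno++)
        {
            PrefetchBuffer(onerel, MAIN_FORKNUM, pblkno);
            CHECK_FOR_INTERRUPTS();
        }
        prefetchedUntil = prefetchStart;
    }

    buf = ReadBufferExtended(onerel, MAIN_FORKNUM, blkno, RBM_NORMAL,
                             vac_strategy);

    /* ... existing page inspection: stop at the first non-empty page ... */

    ReleaseBuffer(buf);
}

Since the scan runs backward, each batch covers the aligned block range
just below the current position, so the prefetch requests are issued
before those blocks are actually read.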

Hi Jeff,

If you are fine with the earlier attached patch, I will mark it as ready
for committer, to get a committer's view on the patch.


Regards,
Hari Babu
Fujitsu Australia


