Re: [Patch] Optimize dropping of relation buffers using dlist - Mailing list pgsql-hackers

From Amit Kapila
Subject Re: [Patch] Optimize dropping of relation buffers using dlist
Date
Msg-id CAA4eK1+thGYM0H2CqB+BEW5sLbE8os_z1D4fZUSJWpxJKuu+cg@mail.gmail.com
In response to RE: [Patch] Optimize dropping of relation buffers using dlist  ("Tang, Haiying" <tanghy.fnst@cn.fujitsu.com>)
Responses RE: [Patch] Optimize dropping of relation buffers using dlist
List pgsql-hackers
On Fri, Dec 25, 2020 at 9:28 AM Tang, Haiying
<tanghy.fnst@cn.fujitsu.com> wrote:
>
> Hi Amit,
>
> >But how can we conclude NBuffers/128 is the maximum relation size?
> >Because the maximum size would be where the performance is worse than
> >the master, no? I guess we need to try NBuffers/64, NBuffers/32,
> >.... until we get the threshold where master performs better.
>
> You are right, we should keep testing with larger sizes until the
> optimization no longer shows a benefit over master.
>
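
For picking the sizes for those runs, a rough sizing query along these
lines (just a sketch; pg_settings reports shared_buffers in 8kB units,
i.e. NBuffers) gives the relation size, in blocks, to aim for at each
threshold:

    SELECT setting::bigint / 128 AS blocks_nbuffers_128,
           setting::bigint / 64  AS blocks_nbuffers_64,
           setting::bigint / 32  AS blocks_nbuffers_32
    FROM pg_settings
    WHERE name = 'shared_buffers';
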
> >I think we should find a better way to display these numbers because in
> >cases like where master takes 537.978s and patch takes 3.815s
>
> Yeah, I think we can change the %reg formula from (patched - master) / master to (patched - master) / patched.
>
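
For illustration, with master at 537.978s and patched at 3.815s, the
current formula gives (3.815 - 537.978) / 537.978, roughly -99%, and it
can never go below -100%; (3.815 - 537.978) / 3.815 is roughly -14000%,
which makes the ~141x difference visible in the table.
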
> >Table size should be more than 8k to get all this data because 8k means
> >just one block. I guess either it is a typo or some other mistake.
>
> 8kB here is the size of each relation, not the total data size.
> For example, for the 400MB case I used 51200 tables (8kB per table).
> Please let me know if you think this is not appropriate.
>
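
Okay, thanks for clarifying. So, if I understand correctly, the setup
is something like the sketch below (table names are illustrative),
where each one-row table occupies a single 8kB heap block and 51200
such tables add up to roughly 400MB:

    DO $$
    BEGIN
      FOR i IN 1..51200 LOOP
        EXECUTE format('CREATE TABLE t_%s (c int)', i);
        -- a single row keeps each table at one 8kB heap block
        EXECUTE format('INSERT INTO t_%s VALUES (1)', i);
      END LOOP;
    END $$;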

I think one table with a varying amount of data is sufficient for the
vacuum test. With a larger number of tables there is a greater chance
of variation. We previously used multiple tables in one of the tests
because of the Truncate operation (which uses DropRelFileNodesAllBuffers
and takes multiple relations as input), and that is not true for the
Vacuum operation which I suppose you are testing here.
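
For the vacuum case, a single-table setup along the lines of the sketch
below (table name and row count are illustrative), repeated with
different data sizes, should be enough; the truncation at the end of
vacuum is what ends up dropping the relation's buffers:

    CREATE TABLE vac_test (id int, pad text);
    INSERT INTO vac_test
        SELECT g, repeat('x', 100) FROM generate_series(1, 1000000) g;
    DELETE FROM vac_test;   -- leave only dead tuples
    VACUUM vac_test;        -- truncation drops the relation's buffers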

-- 
With Regards,
Amit Kapila.


