Re: Using read_stream in index vacuum - Mailing list pgsql-hackers

From Junwang Zhao
Subject Re: Using read_stream in index vacuum
Msg-id CAEG8a3JB+WG9FKmm6cFJn+psJmoiVFvV-N=WEdo0YFcoUSQc3Q@mail.gmail.com
List pgsql-hackers
Hi Andrey,

On Sat, Oct 19, 2024 at 5:39 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:
>
> Hi hackers!
>
> On a recent hacking workshop [0] Thomas mentioned that patches using new API would be welcomed.
> So I prototyped streamlining of B-tree vacuum for a discussion.
> When cleaning an index we must visit every index tuple, thus we uphold a special invariant:
> After checking a trailing block, it must be last according to subsequent RelationGetNumberOfBlocks(rel) call.
>
> This invariant does not allow us to completely replace block loop with streamlining. That's why streamlining is done
> only for number of blocks returned by first RelationGetNumberOfBlocks(rel) call. A tail is processed with regular
> ReadBufferExtended().

I'm wondering why that is the case. ISTM that we could set *p.current_blocknum
= scanblkno* and *p.last_exclusive = num_pages* in each iteration of the outer
for loop, roughly like the sketch below.
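
Something like this untested sketch is what I have in mind. I'm borrowing the
variables from btvacuumscan (vstate, scanblkno, num_pages, info, rel), the
block_range_read_stream_cb / BlockRangeReadStreamPrivate helpers from
read_stream.h, and your patch's new btvacuumpage(buffer) signature; the flag
choice and whether read_stream_reset() really lets us refill the same stream
with a new block range are guesses I haven't verified:

    BlockRangeReadStreamPrivate p;
    ReadStream *stream;
    Buffer      buf;

    p.current_blocknum = scanblkno;
    p.last_exclusive = scanblkno;   /* real range is set inside the loop */

    stream = read_stream_begin_relation(READ_STREAM_FULL,
                                        info->strategy,
                                        rel,
                                        MAIN_FORKNUM,
                                        block_range_read_stream_cb,
                                        &p,
                                        0);

    for (;;)
    {
        /* Get the current relation length */
        num_pages = RelationGetNumberOfBlocks(rel);

        /* Quit if we've scanned the whole relation */
        if (scanblkno >= num_pages)
            break;

        /* Point the callback at the blocks we have not visited yet */
        p.current_blocknum = scanblkno;
        p.last_exclusive = num_pages;

        while ((buf = read_stream_next_buffer(stream, NULL)) != InvalidBuffer)
        {
            /* patched btvacuumpage() takes a buffer, not a block number */
            btvacuumpage(&vstate, buf);
            scanblkno++;
        }

        /* Callback range exhausted; reset so the next pass can refill it */
        read_stream_reset(stream);
    }

    read_stream_end(stream);

That way the tail blocks that appear after the first
RelationGetNumberOfBlocks() call would also go through the stream, but maybe
I'm missing something about why the invariant forbids this.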

+ /* We only streamline number of blocks that are know at the beginning */
know -> known

+ * However, we do not depent on it much, and in future ths
+ * expetation might change.

depent -> depend
ths -> this
expetation -> expectation

>
> Also, it's worth mentioning that we have to jump to the left blocks from recently split pages. We also do it with
> regular ReadBufferExtended(). That's why btvacuumpage() now accepts a buffer, not a block number.
>
>
> I've benchmarked the patch on my laptop (MacBook Air M3) with following workload:
> 1. Initialization
> create unlogged table x as select random() r from generate_series(1,1e7);
> create index on x(r);
> create index on x(r);
> create index on x(r);
> create index on x(r);
> create index on x(r);
> create index on x(r);
> create index on x(r);
> vacuum;
> 2. pgbench with 1 client
> insert into x select random() from generate_series(0,10) x;
> vacuum x;
>
> On my laptop I see a ~3% increase in TPS of the pgbench (from ~101 to ~104), but the statistical noise is very
> significant, bigger than the performance change. Perhaps a less noisy benchmark can be devised.
>
> What do you think? If this approach seems worthwhile, I can adapt the same technology to other AMs.
>

I think this is a use case where the read stream API fits very well, thanks.

>
> Best regards, Andrey Borodin.
>
> [0] https://rhaas.blogspot.com/2024/08/postgresql-hacking-workshop-september.html
>


--
Regards
Junwang Zhao


