> On 22 Mar 2025, at 00:23, Melanie Plageman <melanieplageman@gmail.com> wrote:
>
>
> I've committed the btree and gist read stream users.
Cool! Thanks!
> I think we can
> come back to the test after feature freeze and make sure it is super
> solid.
+1.
> On 22 Mar 2025, at 02:54, Melanie Plageman <melanieplageman@gmail.com> wrote:
>
> On Fri, Mar 21, 2025 at 3:23 PM Melanie Plageman
> <melanieplageman@gmail.com> wrote:
>>
>> I've committed the btree and gist read stream users. I think we can
>> come back to the test after feature freeze and make sure it is super
>> solid.
>
> I've now committed the spgist vacuum user as well. I'll mark the CF
> entry as completed.
That's great! Thank you!
> I wonder if we should do GIN?
GIN vacuum is a logical scan. Back in 2017 I started working on it, but I made some mistakes, which were reverted by
fd83c83 from the released version, and I decided to back off for some time. Perhaps now I can implement a physical scan
for GIN that could benefit from the read stream. But I doubt I will find a committer for this in 19, let alone 18.
We could add some read stream support to hashbulkdelete(): it's not as linear as B-tree, GiST, and SP-GiST, since it
scans only the beginning of each hash bucket, but if buckets are small it might still be more efficient.
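Something along these lines might work as the block number callback (a rough, untested sketch: hash_vacuum_stream_next
and HashVacuumStreamState are made-up names, and I'm hand-waving the metapage handling and bucket splits). It streams
the primary page of each bucket; overflow pages would still be read on demand:

#include "access/hash.h"
#include "storage/read_stream.h"

typedef struct HashVacuumStreamState
{
	HashMetaPage metap;			/* cached copy of the metapage */
	Bucket		cur_bucket;		/* next bucket whose primary page to return */
} HashVacuumStreamState;

static BlockNumber
hash_vacuum_stream_next(ReadStream *stream,
						void *callback_private_data,
						void *per_buffer_data)
{
	HashVacuumStreamState *state = callback_private_data;
	Bucket		bucket;

	if (state->cur_bucket > state->metap->hashm_maxbucket)
		return InvalidBlockNumber;

	bucket = state->cur_bucket++;

	/* primary page of the bucket; overflow pages are still read on demand */
	return BUCKET_TO_BLKNO(state->metap, bucket);
}

The main loop in hashbulkdelete() would then pull the bucket primary pages from read_stream_next_buffer() instead of
reading them one by one.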
>> Looking at the spgist read stream user, I see you didn't convert
>> spgprocesspending(). It seems like you could write a callback that
>> uses the posting list and streamify this as well.
>
> It's probably not worth it -- since we process the pending list for
> each page of the index.
My understanding is that pending lists should be small in real workloads.
Thank you!
Best regards, Andrey Borodin.