Re: Prefetch the next tuple's memory during seqscans - Mailing list pgsql-hackers

From: sirisha chamarthi
Subject: Re: Prefetch the next tuple's memory during seqscans
Date:
Msg-id: CAKrAKeUQ_2bqmCxO6+dUjO+adrrcnZn-+Nz5O8ok0Yb2h=LS_w@mail.gmail.com
In response to: Re: Prefetch the next tuple's memory during seqscans (David Rowley <dgrowleyml@gmail.com>)
Responses: Re: Prefetch the next tuple's memory during seqscans
List: pgsql-hackers


On Tue, Nov 22, 2022 at 11:44 PM David Rowley <dgrowleyml@gmail.com> wrote:
On Wed, 23 Nov 2022 at 20:29, sirisha chamarthi
<sirichamarthi22@gmail.com> wrote:
> I ran your test1 exactly as in your setup, except that the row count is 3000000 (13275 blocks). shared_buffers is 128MB, and the hardware configuration details are at the bottom of the mail. It appears Master + 0001 + 0005 regressed slightly compared to master.

Thank you for running these tests.

Can you share whether the plans used for these queries were parallel plans?
I had set max_parallel_workers_per_gather to 0 to remove the
additional variability from parallel query.

Also, 13275 blocks is about 104MB; does EXPLAIN (ANALYZE, BUFFERS) indicate
that all pages were in shared buffers? I used pg_prewarm() to ensure they
were, so that the runs were consistent.
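
For reference, a fully cached run can be confirmed with something along
these lines (the table name t is only a stand-in for the test table;
pg_prewarm ships as a contrib extension):

    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    SELECT pg_prewarm('t');          -- load all heap pages into shared buffers
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM t;
    -- a fully cached scan reports only "shared hit=..." in the Buffers line,
    -- with no "read=..." component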

I reran the test with max_parallel_workers_per_gather set to 0 and with pg_prewarm. It appears I missed a step while testing on master; thanks for sharing the details. The new numbers show master has higher latency than Master + 0001 + 0005.
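
The rerun amounts to something like the following session (the scan query
and table name are only illustrative, not the exact script used):

    SET max_parallel_workers_per_gather = 0;   -- keep the plan a serial Seq Scan
    SELECT pg_prewarm('t');                    -- keep all pages in shared buffers
    \timing on
    SELECT count(*) FROM t;                    -- repeated runs, latencies averaged below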

Master

Before vacuum:
latency average = 452.881 ms

After vacuum:
latency average = 393.880 ms

Master + 0001 + 0005

Before vacuum:
latency average = 441.832 ms

After vacuum:
latency average = 369.591 ms
