Tuning threshold for BAS_BULKREAD (large tables) - Mailing list pgsql-general

From: Jamison, Kirk
Subject: Tuning threshold for BAS_BULKREAD (large tables)
Date:
Msg-id: D09B13F772D2274BB348A310EE3027C6412928@g01jpexmbkw24
Responses: Re: Tuning threshold for BAS_BULKREAD (large tables) (Ron <ronljohnsonjr@gmail.com>)
List: pgsql-general

Hi,


I have a source code-related question on BufferAccessStrategyType BAS_BULKREAD.

Currently, this access strategy is applied internally to scans of tables larger than 1/4 of shared_buffers:

src/backend/access/heap/heapam.c:initscan()

     if (!RelationUsesLocalBuffers(scan->rs_rd) &&
         scan->rs_nblocks > NBuffers / 4)
     ...
         /* During a rescan, keep the previous strategy object. */
         if (scan->rs_strategy == NULL)
             scan->rs_strategy = GetAccessStrategy(BAS_BULKREAD);
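
For reference (and I may be misreading the code, so please correct me if this is wrong), GetAccessStrategy() in src/backend/storage/buffer/freelist.c appears to give BAS_BULKREAD only a small fixed-size ring of buffers, roughly like this:

     switch (btype)
     {
         case BAS_NORMAL:
             return NULL;                        /* no ring; use the normal buffer pool */
         case BAS_BULKREAD:
             ring_size = 256 * 1024 / BLCKSZ;    /* ~256 kB ring of buffers */
             break;
         ...
     }

If that is right, a large sequential scan would recycle that small ring rather than fill shared_buffers, which is part of what I would like to confirm.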


Users can tune their shared_buffers size, but they cannot tune this threshold.

I'm just wondering how it affects the current workload when the table is larger than shared_buffers.

Does it really cache such large tables in shared buffers? How does it affect the other pages already stored in shared buffers?


Oracle also has a closely related user parameter that allocates buffer cache space for large tables:

https://docs.oracle.com/database/121/VLDBG/GUID-A553169D-C6CD-443E-88C3-B746D5E32923.htm#VLDBG14145


I also want to ask how PostgreSQL has optimized this, e.g. with synchronized sequential scans.

If it's beneficial, I'm wondering whether it would also be helpful for Postgres users to be able to tune this threshold instead of relying on the hardcoded value (NBuffers / 4).
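
As a purely hypothetical sketch (bulkread_strategy_threshold below is not an existing parameter, just an illustration), the check in initscan() could be driven by a GUC instead of the fixed divisor:

     /*
      * hypothetical GUC: fraction of shared_buffers above which a
      * sequential scan switches to the BAS_BULKREAD strategy
      */
     double      bulkread_strategy_threshold = 0.25;

     if (!RelationUsesLocalBuffers(scan->rs_rd) &&
         scan->rs_nblocks > (BlockNumber) (NBuffers * bulkread_strategy_threshold))
         ...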
