Re: BitmapHeapScan streaming read user and prelim refactoring - Mailing list pgsql-hackers

From Melanie Plageman
Subject Re: BitmapHeapScan streaming read user and prelim refactoring
Date
Msg-id CAAKRu_bNoYcRPgymvam_XpxkQW+giiOAh1x=JNZ2ESZ_d53PsQ@mail.gmail.com
In response to Re: BitmapHeapScan streaming read user and prelim refactoring  (Jakub Wartak <jakub.wartak@enterprisedb.com>)
List pgsql-hackers
On Mon, Mar 17, 2025 at 3:44 AM Jakub Wartak
<jakub.wartak@enterprisedb.com> wrote:
>
> dunno, I've just asked whether it isn't suspicious to anyone else
> except me that e_io_c > m_io_c rather than e_io_c <= m_io_c. My
> understanding was always that to tune max IO queue depth you would do this:
> a. up to N backends (up to max_connections; usually much lower) * e_io_c
> b. autovacuum_max_workers * m_io_c
> c. just one (standby/recovering) * m_io_c
>
> The thing (for me) is: if we are allowing much higher IOPS for the "a"
> scenario, then why can't the standby use the same (if not higher)
> IOPS for prefetching in the "c" scenario? After all, it is a much more
> critical and latency-sensitive thing (replication lag).
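The a/b/c breakdown above can be sketched as simple arithmetic. This is a back-of-the-envelope illustration only; the backend and worker counts below are hypothetical example values, not recommendations or PostgreSQL defaults.

```python
# Worst-case aggregate I/O queue depth per Jakub's a/b/c scenarios.
# All counts below are hypothetical example values for illustration.
effective_io_concurrency = 16      # per-backend prefetch distance (e_io_c)
maintenance_io_concurrency = 16    # per maintenance/recovery process (m_io_c)

active_backends = 50               # (a) concurrent query backends
autovacuum_max_workers = 3         # (b) autovacuum workers
recovery_processes = 1             # (c) single startup/recovery process

a = active_backends * effective_io_concurrency           # 800
b = autovacuum_max_workers * maintenance_io_concurrency  # 48
c = recovery_processes * maintenance_io_concurrency      # 16
print(a, b, c)  # 800 48 16
```

The point of the sketch: scenario "c" is bounded by a single process times m_io_c, so it contributes far less queue depth than "a" even with an equal per-process setting.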

This sounds quite reasonable to me. Given that I just changed the default
effective_io_concurrency to 16, what value would you say is reasonable
for maintenance_io_concurrency? I based the eic change on
experimentation -- seeing where the benefits flatlined for a certain
class of query on a couple of different kinds of machines with different
IO latencies. I don't feel strongly that we need to be as rigorous for
maintenance_io_concurrency, but I'm also not sure 160 seems reasonable
(which would be the same ratio as before).
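For reference, a quick sketch of the ratio being questioned, assuming the prior defaults of effective_io_concurrency = 1 and maintenance_io_concurrency = 10 (a 10x ratio):

```python
# The 10x ratio between the prior GUC defaults, applied to the new
# effective_io_concurrency default of 16.
old_eic, old_mic = 1, 10
ratio = old_mic // old_eic   # 10x
new_eic = 16                 # new default discussed upthread
print(new_eic * ratio)       # 160 -- the value questioned above
```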

- Melanie


