Re: Automatically sizing the IO worker pool - Mailing list pgsql-hackers

From Andres Freund
Subject Re: Automatically sizing the IO worker pool
Msg-id v4e2phxr3n22lv6skhtczbhy3xbyqwxuy7pj7cdh6afmhwhrqb@k6cpnb2kjw6c
In response to Re: Automatically sizing the IO worker pool  (Thomas Munro <thomas.munro@gmail.com>)
Hi,

On 2026-04-08 14:09:16 +1200, Thomas Munro wrote:
> On Wed, Apr 8, 2026 at 12:30 PM Andres Freund <andres@anarazel.de> wrote:
> > On 2026-04-08 11:18:51 +1200, Thomas Munro wrote:
> > > >                 /* Choose one worker to wake for this batch. */
> > > >                 if (worker == -1)
> > > >                         worker = pgaio_worker_choose_idle(-1);
> > >
> > > Well I didn't want to wake a worker if we'd failed to enqueue
> > > anything.
> >
> > I think it's worth waking up workers if there are idle ones and the queue is
> > full?
> 
> True, but I prefer to test nsync because there is another reason to break:

I don't follow.  What I was proposing applies after the conditional lock
acquisition has succeeded -- and so does your nsync == 0 check.

> +/*
> + * Tell postmaster that we think a new worker is needed.
> + */
> +static void
> +pgaio_worker_request_grow(void)
> +{
> +    /*
> +     * Suppress useless signaling if we already know that we're at the
> +     * maximum.  This uses an unlocked read of nworkers, but that's OK for
> +     * this heuristic purpose.
> +     */
> +    if (io_worker_control->nworkers < io_max_workers)
>      {
> -        io_worker_control->workers[i].latch = NULL;
> -        io_worker_control->workers[i].in_use = false;
> +        if (!io_worker_control->grow)
> +        {
> +            io_worker_control->grow = true;
> +            pg_memory_barrier();
> +
> +            /*
> +             * If the postmaster has already been signaled, don't do it again
> +             * until the postmaster clears this flag.  There is no point in
> +             * repeated signals if grow is being set and cleared repeatedly
> +             * while the postmaster is waiting for io_worker_launch_interval
> +             * (which it applies even to canceled requests).
> +             */
> +            if (!io_worker_control->grow_signal_sent)
> +            {
> +                io_worker_control->grow_signal_sent = true;
> +                pg_memory_barrier();
> +                SendPostmasterSignal(PMSIGNAL_IO_WORKER_GROW);
> +            }
> +        }
>      }
>  }


I'd probably use early returns to make it a bit more readable.



> +static bool
> +pgaio_worker_can_timeout(void)
> +{
> +    PgAioWorkerSet workerset;
> +
> +    /* Serialize against pool size changes. */
> +    LWLockAcquire(AioWorkerControlLock, LW_SHARED);
> +    workerset = io_worker_control->workerset;
> +    LWLockRelease(AioWorkerControlLock);
> +
> +    if (MyIoWorkerId != pgaio_workerset_get_highest(&workerset))
> +        return false;
> +
> +    if (MyIoWorkerId < io_min_workers)
> +        return false;
> +
> +    return true;
> +}

I guess I'd move the < io_min_workers check earlier, so that you don't
acquire the lock if it'll return false anyway.


Greetings,

Andres Freund


