Re: WAL prefetch - Mailing list pgsql-hackers

From Konstantin Knizhnik
Subject Re: WAL prefetch
Date
Msg-id 55200c7c-f189-5196-56d6-9001d92a6ea9@postgrespro.ru
In response to Re: WAL prefetch  (Ants Aasma <ants.aasma@eesti.ee>)
Responses Re: WAL prefetch
List pgsql-hackers


On 19.06.2018 16:57, Ants Aasma wrote:
On Tue, Jun 19, 2018 at 4:04 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:
Right. My point is that while spawning bgworkers probably helps, I don't
expect it to be enough to fill the I/O queues on modern storage systems.
Even if you start say 16 prefetch bgworkers, that's not going to be
enough for large arrays or SSDs. Those typically need way more than 16
requests in the queue.

Consider for example [1] from 2014 where Merlin reported how S3500
(Intel SATA SSD) behaves with different effective_io_concurrency values:

[1]
https://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com

Clearly, you need to prefetch 32/64 blocks or so. Consider you may have
multiple such devices in a single RAID array, and that this device is
from 2014 (and newer flash devices likely need even deeper queues).

For reference, a typical datacenter SSD needs a queue depth of 128 to saturate a single device. [1] Multiply that appropriately for RAID arrays.

How is that related to the results for the S3500, where there is almost no performance improvement for effective_io_concurrency > 8?
Starting 128 or more workers to perform prefetching is definitely not acceptable...
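
Just to illustrate the queue-depth point: a single process can keep many requests in the device queue by issuing posix_fadvise(POSIX_FADV_WILLNEED) hints ahead of its synchronous reads, the way effective_io_concurrency already works for bitmap heap scans, without spawning any extra workers. A rough sketch only; the file name, block count and prefetch distance below are made up for illustration:

#define _POSIX_C_SOURCE 200112L

#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

#define BLCKSZ  8192
#define NBLOCKS 1024            /* made-up relation size */

/* Ask the kernel to start reading one 8kB block; returns immediately. */
static void
prefetch_block(int fd, off_t blkno)
{
    (void) posix_fadvise(fd, blkno * BLCKSZ, BLCKSZ, POSIX_FADV_WILLNEED);
}

int
main(void)
{
    int     fd = open("base/12345/16384", O_RDONLY);   /* hypothetical file */
    off_t   distance = 64;      /* number of readahead hints kept in flight */
    char    buf[BLCKSZ];

    if (fd < 0)
        return 1;

    /* prime the queue with the first "distance" hints */
    for (off_t blkno = 0; blkno < distance && blkno < NBLOCKS; blkno++)
        prefetch_block(fd, blkno);

    for (off_t blkno = 0; blkno < NBLOCKS; blkno++)
    {
        /* stay "distance" hints ahead of the synchronous read */
        if (blkno + distance < NBLOCKS)
            prefetch_block(fd, blkno + distance);

        (void) pread(fd, buf, BLCKSZ, blkno * BLCKSZ);
    }

    close(fd);
    return 0;
}

With something like this the queue depth is bounded by "distance", not by the number of processes, so keeping 64 or 128 requests outstanding does not require 64 or 128 workers.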



-- 
Konstantin Knizhnik
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company 
