On 10/7/25 14:08, Tomas Vondra wrote:
> ...
>>>>>> I think doing this kind of measurement via normal SQL query processing is
>>>>>> almost always going to have too much other influences. I'd measure using fio
>>>>>> or such instead. It'd be interesting to see fio numbers for your disks...
>>>>>>
>>>>>> fio --directory /srv/fio --size=8GiB --name test --invalidate=0 --bs=$((8*1024)) --rw read --buffered 0 --time_based=1 --runtime=5 --ioengine pvsync --iodepth 1
>>>>>> vs --rw randread
>>>>>>
>>>>>> gives me 51k/11k for sequential/rand on one SSD and 92k/8.7k for another.
>>>>>>
>>>>>
>>>>> I can give it a try. But do we really want to strip out "our"
>>>>> overhead when reading the data?
>
> I got this on the two RAID devices (NVMe and SATA):
>
> NVMe: 83.5k / 15.8k
> SATA: 28.6k / 8.5k
>
> So the same ballpark / ratio as your test. Not surprising, really.
>
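For reference, the sequential/random ratios implied by the numbers quoted in
this thread work out as follows (my arithmetic, not part of the original
mails; the labels are mine):

```python
# Seq/rand IOPS pairs quoted upthread, in IOPS.
results = {
    "SSD 1 (upthread)": (51_000, 11_000),
    "SSD 2 (upthread)": (92_000, 8_700),
    "NVMe RAID":        (83_500, 15_800),
    "SATA RAID":        (28_600, 8_500),
}
for name, (seq, rand) in results.items():
    print(f"{name}: {seq / rand:.1f}x")
```

So roughly 3x-11x between sequential and random, with the RAID devices in
the middle of that range.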
FWIW I do see about this number in iostat. There's a 500M test running
right now, and iostat reports this:

  Device         r/s      rkB/s  ...  rareq-sz  ...  %util
  md1       15273.10  143512.80  ...      9.40  ...  93.64

So it's not like we're issuing far fewer I/Os than the SSD can handle.
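As a quick sanity check on those iostat fields (my arithmetic, not from the
mail): r/s multiplied by the average read request size (rareq-sz, in KiB)
should roughly reproduce rkB/s, which it does here:

```python
# iostat consistency check: rkB/s ~= r/s * rareq-sz (KiB).
r_per_s = 15273.10      # reads per second (r/s)
rareq_sz_kib = 9.40     # average read request size (rareq-sz), KiB
print(round(r_per_s * rareq_sz_kib, 1))  # ~143567.1, vs reported 143512.80
```

The ~9.4 KiB average request size also matches the 8 KiB block size plus
some merging, so the numbers hang together.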
regards
--
Tomas Vondra