Re: Raid 10 chunksize - Mailing list pgsql-performance

From Mark Kirkwood
Subject Re: Raid 10 chunksize
Msg-id 49CB07DE.4040102@paradise.net.nz
In response to Re: Raid 10 chunksize  (Stef Telford <stef@ummon.com>)
Responses Re: Raid 10 chunksize  (Scott Carey <scott@richrelevance.com>)
List pgsql-performance
Stef Telford wrote:
>
> Hello Mark,
>     Okay, so, take all of this with a pinch of salt, but, I have the
> same config (pretty much) as you, with checkpoint_Segments raised to
> 192. The 'test' database server is Q8300, 8GB ram, 2 x 7200rpm SATA
> into motherboard which I then lvm stripped together; lvcreate -n
> data_lv -i 2 -I 64 mylv -L 60G (expandable under lvm2). That gives me
> a stripe size of 64. Running pgbench with the same scaling factors;
>
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 100
> number of clients: 24
> number of transactions per client: 12000
> number of transactions actually processed: 288000/288000
> tps = 1398.907206 (including connections establishing)
> tps = 1399.233785 (excluding connections establishing)
>
>     It's also running ext4dev, but this is the 'playground' server,
> not the real iron (and I dread to do that on the real iron). In short,
> I think that the chunksize/stripe size is killing you. Personally, I
> would go for 64 or 128 .. that's just my 2c .. feel free to
> ignore/scorn/laugh as applicable ;)
>
>
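(Editor's note: a sketch of Stef's striped-volume setup, for anyone wanting to reproduce it. The post passes "mylv" where lvcreate expects a volume group name; "myvg" below is a placeholder, not from the thread.)

```shell
# Stripe a logical volume across 2 physical volumes (-i 2) with a
# 64 kB stripe size (-I 64), as in Stef's setup.
# "myvg" is a placeholder volume group name -- substitute your own.
lvcreate -n data_lv -i 2 -I 64 -L 60G myvg
```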
Stef - I suspect that your (quite high) tps figure is because your SATA
disks are not honoring the fsync() request for each commit. SCSI/SAS
disks tend, by default, to flush their write cache on fsync() - ATA/SATA
disks tend not to. Some filesystems (e.g. XFS) will try to work around
this with write-barrier support, but it depends on the disk firmware.
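(Editor's note: a quick way to sanity-check whether a drive really flushes on commit - my sketch, not something from the thread - is to time synchronous writes with GNU dd's oflag=dsync:)

```shell
# Sketch only: time 200 synchronous 8 kB writes.
# oflag=dsync opens the file with O_DSYNC, so each write must reach
# stable storage before the next one begins. A 7200 rpm disk that
# truly honours this can only manage on the order of ~120 such writes
# per second (roughly one platter revolution each); rates in the
# thousands suggest the drive is acknowledging writes from its
# volatile cache.
dd if=/dev/zero of=fsync_test.dat bs=8k count=200 oflag=dsync
rm -f fsync_test.dat
```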

Thanks for your reply!

Mark
