Thread: Raid Chunk Sizes for DSS type DB
It's not an optimal setup, but since I only have 3x500G drives to play
with, I can't build a RAID 10, so I'm going for RAID 5 to test out
capability before I decide on the RAID 5 vs RAID 1 tradeoff. (RAID 1 = no
fault tolerance since 3 drives.)

Anyway, I'm trying to figure out the chunk size for the raid. I'm using
4k chunks since I'm reading that for DSS-type queries, with lots of large
reads, I should be using small chunks. [1] I've aligned the disks per [2],
and my stride will be 3 for ext3.

mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 --chunk=4 /dev/sdd1 /dev/sde1 /dev/sdf1
mkfs.ext3 -E stride=3 -O dir_index /dev/md1
mount /dev/md1 /pgsql/ -o noatime,data=writeback

[1] http://wiki.centos.org/HowTos/Disk_Optimization
[2] http://www.pythian.com/blogs/411/aligning-asm-disks-on-linux

Just wondering if there are any suggestions/comments on this from the
PG people here.

Thanks for any/all comments.
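For reference, here is a rough sketch of the stride arithmetic that is
commonly cited for ext3 on md RAID; the chunk size in the example is
illustrative only and does not describe the setup in the post above:

# Rule of thumb (illustrative, not a recommendation):
#   stride = RAID chunk size / filesystem block size
# e.g. a 64k chunk on a 4k-block ext3 filesystem would give stride=16:
mkfs.ext3 -b 4096 -E stride=16 -O dir_index /dev/md1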
On Tue, 30 Oct 2007 09:42:37 +0800
Ow Mun Heng <Ow.Mun.Heng@wdc.com> wrote:

> It's not an optimal setup but since I only have 3x500G drives to play
> with, I can't build a Raid10, so I'm going for Raid5 to test out
> capability before I decide on Raid5 vs Raid1 tradeoff. (Raid1 = No
> Fault tolerance since 3 drives)

Uhhh, RAID 1 is your best bet. You get fault tolerance (mirrored) plus
you get a hot spare (3 drives).

RAID 5, on the other hand, will be very expensive on writes.

Joshua D. Drake

> Anyway.. I'm trying to figure out the chunk size for the raid. I'm
> using 4k chunks since I'm reading that for DSS type queries, lots of
> Large Reads, I should be using small chunks. [1] and I've aligned the
> disks per [2]
>
> and my stride will be 3 for ext3
>
> mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 --chunk=4 /dev/sdd1 /dev/sde1 /dev/sdf1
> mkfs.ext3 -E stride=3 -O dir_index /dev/md1
> mount /dev/md1 /pgsql/ -o noatime,data=writeback
>
> [1] http://wiki.centos.org/HowTos/Disk_Optimization
> [2] http://www.pythian.com/blogs/411/aligning-asm-disks-on-linux
>
> Just wondering if there's any suggestions/comments on this from the
> PG ppl here.
>
> Thanks for any/all comments.

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/
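For what it's worth, the two-active-disks-plus-hot-spare layout suggested
here would look roughly like this with md; a sketch only, with the device
names carried over from the original post as assumptions:

# Mirror across two drives, keep the third as a hot spare:
mdadm --create --verbose /dev/md1 --level=1 --raid-devices=2 \
      --spare-devices=1 /dev/sdd1 /dev/sde1 /dev/sdf1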
On 10/29/07, Joshua D. Drake <jd@commandprompt.com> wrote:
> On Tue, 30 Oct 2007 09:42:37 +0800
> Ow Mun Heng <Ow.Mun.Heng@wdc.com> wrote:
>
> > It's not an optimal setup but since I only have 3x500G drives to play
> > with, I can't build a Raid10, so I'm going for Raid5 to test out
> > capability before I decide on Raid5 vs Raid1 tradeoff. (Raid1 = No
> > Fault tolerance since 3 drives)
>
> Uhhh, RAID 1 is your best bet. You get fault tolerance (mirrored) plus
> you get a hot spare (3 drives).
>
> RAID 5, on the other hand, will be very expensive on writes.

I agree. Note that, at least in Linux, you can have more than two disks
in a mirror. That makes reads faster, and writes usually aren't affected
too negatively.
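A minimal sketch of the three-way mirror described above, with all three
drives holding the same image; the device names are assumptions taken
from the original post:

mdadm --create --verbose /dev/md1 --level=1 --raid-devices=3 \
      /dev/sdd1 /dev/sde1 /dev/sdf1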
--- On Mon, 10/29/07, Ow Mun Heng <Ow.Mun.Heng@wdc.com> wrote:
> (Raid1 = No Fault tolerance since 3 drives)

RAID 1 with three drives will have fault tolerance. You will have three
disks with the same image. This is triple redundancy, and it could
greatly improve select performance.

Having said this, I've used software RAID 5 and am currently using RAID 10
implemented from PCI IDE cards, but I have had data-loss errors occur with
both setups. I am not sure if the problem is in the drives, the PCI cards,
or the software RAID setup. (Thank goodness that this is my toy computer.)

However, I've used RAID 1 with great success for my OS partitions and
haven't had any problems over the last couple of years.

Regards,
Richard Broersma
On 30.10.2007 03:11, Joshua D. Drake wrote:
> Ow Mun Heng <Ow.Mun.Heng@wdc.com> wrote:
>
>> It's not an optimal setup but since I only have 3x500G drives to play
>> with, I can't build a Raid10
>
> Uhhh RAID 1 is your best bet. You get fault tolerance (mirrored) plus
> you get a hot spare (3 drives).

This is not true with Linux MD RAID. It might sound scary to most people,
but you _can_ have a RAID 10 with only 3 drives.

http://en.wikipedia.org/wiki/Non-standard_RAID_levels#Linux_MD_RAID_10

Another thing you want to do is to check whether the MD device you created
supports barriers. I know MD RAID 1 does and MD RAID 5 does not; I don't
know about MD RAID 10. If it does not, make sure you have a UPS.

--
Regards,
Hannes Dorbath
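A sketch of what a three-drive md RAID 10 as described in the Wikipedia
link might look like; the choice of the "far 2" layout and the device
names are assumptions, not a tested recommendation:

# md allows RAID 10 on an odd number of drives via its own layouts:
mdadm --create --verbose /dev/md1 --level=10 --layout=f2 \
      --raid-devices=3 /dev/sdd1 /dev/sde1 /dev/sdf1
# To check barrier support on ext3, mount with -o barrier=1 and watch
# dmesg for a message about barriers failing/being disabled.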