Thread: Best use of second controller with faster disks?
Configuration:
OS: FreeBSD 6.1 Stable
PostgreSQL: 8.1.4
RAID card 1 with 8 drives: 7200 RPM SATA, RAID10
RAID card 2 with 4 drives: 10K RPM SATA, RAID10

Besides having pg_xlog on the 10K RPM drives, what else can I do to make the best use of those drives other than putting some data on them?

iostat shows the drives getting used very little, even during constant updates and vacuum.

Some of the postgresql.conf settings that may be relevant:
wal_buffers = 64
checkpoint_segments = 64

If nothing else I will start to put index files on the 10K RPM RAID.

As for the version of PostgreSQL: we are likely getting a second machine, breaking off some of the data, and changing programs to read from both machines; at some point, when there is little data left on the 8.1.4 machine, we will upgrade it. The new machine will have 8.2.4. We have a lot of historical data that never changes, which is the main driving factor behind looking to split the database into current and historical.
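For putting index files on the 10K RPM RAID, tablespaces (available since PostgreSQL 8.0) are the usual mechanism. A minimal sketch, assuming a hypothetical mount point /raid10k, database mydb, and example index orders_date_idx:

    # directory must be empty and owned by the postgres OS user (pgsql on FreeBSD ports installs)
    mkdir -p /raid10k/pg_indexes
    chown pgsql:pgsql /raid10k/pg_indexes
    psql -d mydb -c "CREATE TABLESPACE fastspace LOCATION '/raid10k/pg_indexes'"
    psql -d mydb -c "ALTER INDEX orders_date_idx SET TABLESPACE fastspace"

ALTER INDEX ... SET TABLESPACE rewrites the index and holds an exclusive lock on it while it is being moved, so it is best done during a quiet window.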
On Jun 11, 2007, at 9:14 PM, Francisco Reyes wrote:

> RAID card 1 with 8 drives: 7200 RPM SATA, RAID10
> RAID card 2 with 4 drives: 10K RPM SATA, RAID10

What RAID card have you got? I'm playing with an external enclosure which has an Areca SATA RAID in it and connects to the host via Fibre Channel. It is wicked fast, and supports RAID6, which seems to be as fast as RAID10 in my initial testing on this unit.

What drives are you booting from? If you're booting from the 4-drive RAID10, perhaps split that into a pair of RAID1s, boot from one, and use the other as the pg_xlog disk.

However, I must say that with my 16-disk array, peeling the log off the main volume actually slowed it down a bit. I think the RAID card is just so fast at doing the RAID6 computations that having the extra striping is a bigger gain than a dedicated RAID1 for the log. Right now I'm testing an 8-disk RAID6 configuration on the same device; it seems slower than the 16-disk RAID6, but I haven't tried 8-disk RAID10 with a dedicated log yet.

> Besides having pg_xlog on the 10K RPM drives, what else can I do to
> make the best use of those drives other than putting some data on them?
>
> iostat shows the drives getting used very little, even during
> constant updates and vacuum.
>
> Some of the postgresql.conf settings that may be relevant:
> wal_buffers = 64
> checkpoint_segments = 64

I'd bump checkpoint_segments up to 256 given the amount of disk you've got dedicated to it. Be sure to increase checkpoint_timeout too.

And if you can move to FreeBSD 6.2 you should pick up some speed in the network layer and possibly in disk I/O.
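Spelled out, that suggestion amounts to something like the following in postgresql.conf; the checkpoint_timeout value here is only an illustrative choice, not one given in the thread:

    checkpoint_segments = 256
    checkpoint_timeout = 900    # seconds; the default is 300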
Vivek Khera writes:

> What RAID card have you got?

Two 3ware cards; I believe both are 9550SX.

> I'm playing with an external enclosure which has an Areca SATA RAID
> in it and connects to the host via Fibre Channel.

What is the OS? FreeBSD? One of the reasons I stick with 3ware is that it is well supported in FreeBSD and has a pretty decent management program.

> It is wicked fast, and supports RAID6, which seems to be as fast as
> RAID10 in my initial testing on this unit.

For my next "large" machine I am also leaning towards RAID6. The space difference is just too big to ignore. 3ware recommends RAID6 for 5+ drives.

> What drives are you booting from?

Booting from the 8-drive RAID.

> If you're booting from the 4-drive RAID10, perhaps split that into a
> pair of RAID1s, boot from one, and use the other as the pg_xlog disk.

Maybe for the next machine.

> However, I must say that with my 16-disk array, peeling the log off
> the main volume actually slowed it down a bit. I think the RAID card
> is just so fast at doing the RAID6 computations that having the extra
> striping is a bigger gain than a dedicated RAID1 for the log.

Could be. It seems RAID6 is supposed to be a good balance between performance and available space.

> Right now I'm testing an 8-disk RAID6 configuration on the same
> device; it seems slower than the 16-disk RAID6, but I haven't tried
> 8-disk RAID10 with a dedicated log yet.

Is all this on the same controller?

> I'd bump checkpoint_segments up to 256 given the amount of disk
> you've got dedicated to it. Be sure to increase checkpoint_timeout too.

Thanks. Will try that.
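One way to check whether the 10K RPM array actually starts picking up load after changes like these is to watch per-device activity on FreeBSD; the device names below are hypothetical:

    # per-device statistics for the two arrays, refreshed every 5 seconds
    iostat -d -w 5 da0 da1
    # GEOM-level view with a %busy column per device
    gstat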
On Jun 12, 2007, at 8:33 PM, Francisco Reyes wrote:

> Vivek Khera writes:
>
>> What RAID card have you got?
>
> Two 3ware cards; I believe both are 9550SX.
>
>> I'm playing with an external enclosure which has an Areca SATA RAID
>> in it and connects to the host via Fibre Channel.
>
> What is the OS? FreeBSD?

FreeBSD, indeed. The vendor, Partners Data Systems, did a wonderful job ensuring that everything integrated well, to the point of talking with various FreeBSD developers, LSI engineers, etc., and sent me a fully tested end-to-end system with a Sun X4100 M2, an LSI 4Gb Fibre Channel card, and their RAID array, with FreeBSD already installed. I can't recommend them enough: if you need a high-end RAID system for FreeBSD (or another OS, I suppose), do check them out.

>> Right now I'm testing an 8-disk RAID6 configuration on the same
>> device; it seems slower than the 16-disk RAID6, but I haven't tried
>> 8-disk RAID10 with a dedicated log yet.
>
> Is all this on the same controller?

Yes. The system is in testing right now, so I'm playing with all sorts of different disk configurations, and it seems that the 16-disk RAID6 is the winner so far. The next best was a 14-disk RAID6 plus a 2-disk RAID1 for the log. I have separate disks built into the system for boot.
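For a rough first-pass comparison between layouts like these, a simple sequential write/read test is common; the path and size below are only illustrative, and a fuller comparison would use something like bonnie++ or pgbench:

    # write and then read back an 8 GB file (larger than RAM, to get past the cache)
    dd if=/dev/zero of=/array/testfile bs=1m count=8192
    dd if=/array/testfile of=/dev/null bs=1m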
Vivek Khera writes:

> FreeBSD, indeed. The vendor, Partners Data Systems, did a wonderful

This one? http://www.partnersdata.com

> job ensuring that everything integrated well, to the point of talking
> with various FreeBSD developers, LSI engineers, etc., and sent me a
> fully tested end-to-end system with a Sun X4100 M2, an LSI 4Gb Fibre
> Channel card, and their RAID array, with FreeBSD already installed.

Is there a management program in FreeBSD for the Areca card?

So that I understand the setup you are describing:
Machine has an Areca controller
Connects to an external enclosure
Enclosure has an LSI controller

> I have separate disks built into the system for boot.

How did you get FreeBSD to newfs such a large setup? newfs -s /dev/raw-disk?

What are the speed/size of the disks? 7K RPM?
On Jun 13, 2007, at 10:36 PM, Francisco Reyes wrote:

>> FreeBSD, indeed. The vendor, Partners Data Systems, did a wonderful
>
> This one?
> http://www.partnersdata.com

That's the one.

>> job ensuring that everything integrated well, to the point of talking
>> with various FreeBSD developers, LSI engineers, etc., and sent me a
>> fully tested end-to-end system with a Sun X4100 M2, an LSI 4Gb Fibre
>> Channel card, and their RAID array, with FreeBSD already installed.
>
> Is there a management program in FreeBSD for the Areca card?
>
> So that I understand the setup you are describing:
> Machine has an Areca controller
> Connects to an external enclosure
> Enclosure has an LSI controller

In the past I've had systems with LSI and Adaptec RAID cards. The LSI 320-2X is the fastest one I've ever had. The Adaptec ones suck because there is no management software for the newer cards on FreeBSD, especially under amd64.

The system I'm working on now is this: a Sun X4100 M2 with an LSI 4Gb Fibre Channel card, connected to an external self-contained RAID enclosure, the Triton RAID from Partners Data. The Triton unit has an Areca SATA RAID controller and 16 disks in it.

>> I have separate disks built into the system for boot.
>
> How did you get FreeBSD to newfs such a large setup?
> newfs -s /dev/raw-disk?

It is only 2TB raw, 1.7TB formatted :-) I just used sysinstall to run fdisk, label, and newfs for me. Since it is just postgres data and no file will ever be larger than 1GB, I didn't need to make any adjustments to the newfs parameters.

> What are the speed/size of the disks?
> 7K RPM?

I splurged for the 10K RPM drives, even though they are smaller, at 150GB each.
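Done by hand rather than through sysinstall, the same steps on FreeBSD 6.x look roughly like this; da1 and /pgdata are hypothetical names for the array device and mount point:

    fdisk -I da1            # dedicate the disk to a single FreeBSD slice
    bsdlabel -w da1s1       # write a standard label (creates the "a" partition)
    newfs -U /dev/da1s1a    # build the filesystem with soft updates enabled
    mkdir -p /pgdata && mount /dev/da1s1a /pgdata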
Vivek Khera writes:

> Since it is just postgres data and no file will ever be larger than
> 1GB, I didn't need to make any adjustments to the newfs parameters.

You should consider using "newfs -i 65536" for partitions to be used for PostgreSQL. You will get more usable space and will still have lots of free inodes. For my next PostgreSQL server I am likely going to use "newfs -i 262144".

On my current primary DB I have 2,049 inodes in use and 3,539,389 free. That was with newfs -i 65536.
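As a concrete illustration of that suggestion, assuming the same hypothetical da1s1a partition as above:

    newfs -U -i 65536 /dev/da1s1a   # allocate roughly one inode per 64 KB of data space instead of the much denser default
    mount /dev/da1s1a /pgdata
    df -ih /pgdata                  # -i reports inode usage alongside disk space

Because PostgreSQL stores each table and index in segment files of at most 1 GB, a data-only partition needs comparatively few inodes, which is what makes the lower inode density safe.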