Re: extremly low memory usage - Mailing list pgsql-performance
From | John A Meinel |
---|---|
Subject | Re: extremly low memory usage |
Date | |
Msg-id | 4307E7A4.7090300@arbash-meinel.com |
In response to | Re: extremly low memory usage (Ron <rjpeace@earthlink.net>) |
Responses | Re: extremly low memory usage |
List | pgsql-performance |
Ron wrote:
> At 02:53 PM 8/20/2005, Jeremiah Jahn wrote:
>
>> On Fri, 2005-08-19 at 16:03 -0500, John A Meinel wrote:
>> > Jeremiah Jahn wrote:
>> > > On Fri, 2005-08-19 at 12:18 -0500, John A Meinel wrote:
>> > >
>> <snip>
>> >
>> > > it's cached alright. I'm getting a read rate of about 150MB/sec. I
>> > > would have thought it would be faster with my raid setup. I think
>> > > I'm going to scrap the whole thing and get rid of LVM. I'll just do
>> > > a straight ext3 system. Maybe that will help. Still trying to get
>> > > suggestions for a stripe size.

Well, since you can read from the RAID at 150MB/s, that means it is the
actual I/O speed. It may not be cached in RAM. Perhaps you could try the
same test, only using say 1G, which should be cached (a rough sketch of
such a test is at the end of this mail).

>> > I don't think 150MB/s is out of the realm for a 14 drive array.
>> > How fast is
>> >   time dd if=/dev/zero of=testfile bs=8192 count=1000000
>>
>> time dd if=/dev/zero of=testfile bs=8192 count=1000000
>> 1000000+0 records in
>> 1000000+0 records out
>>
>> real    1m24.248s
>> user    0m0.381s
>> sys     0m33.028s
>>
>> > (That should create an 8GB file, which is too big to cache everything)
>> > And then how fast is:
>> >   time dd if=testfile of=/dev/null bs=8192 count=1000000
>>
>> time dd if=testfile of=/dev/null bs=8192 count=1000000
>> 1000000+0 records in
>> 1000000+0 records out
>>
>> real    0m54.139s
>> user    0m0.326s
>> sys     0m8.916s
>>
>> and on a second run:
>>
>> real    0m55.667s
>> user    0m0.341s
>> sys     0m9.013s
>>
>> > That should give you a semi-decent way of measuring how fast the RAID
>> > system is, since it should be too big to cache in ram.
>>
>> about 150MB/sec. Is there no better way to make this go faster...?

I'm actually curious about PCI bus saturation at this point. Old 32-bit
33MHz PCI could only push about 1Gbit/s, i.e. a bit over 100MB/s. Now,
I'm guessing that this is a higher performance system. But I'm really
surprised that your write speed is that close to your read speed.
(100MB/s write, 150MB/s read.)

> Assuming these are U320 15Krpm 147GB HDs, a RAID 10 array of 14 of them
> doing raw sequential IO like this should be capable of ~7*75MB/s =
> 525MB/s using Seagate Cheetah 15K.4's, ~7*79MB/s = 553MB/s if using
> Fujitsu MAU's, and ~7*86MB/s = 602MB/s if using Maxtor Atlas 15K II's
> to devices external to the RAID array.

I know I thought these were SATA drives, over 2 controllers. I could be
completely wrong, though.

> _IF_ the controller setup is high powered enough to keep that kind of
> IO rate up. This will require a controller or controllers providing
> dual channel U320 bandwidth externally and quad channel U320 bandwidth
> internally. IOW, it needs a controller or controllers talking 64b
> 133MHz PCI-X, reasonably fast DSP/CPU units, and probably a decent
> sized IO buffer as well.
>
> AFAICT, the Dell PERC4 controllers use various flavors of the LSI Logic
> MegaRAID controllers. What I don't know is which exact one yours is,
> nor do I know if it (or any of the MegaRAID controllers) are high
> powered enough.
>
> Talk to your HW supplier to make sure you have controllers adequate to
> your HDs.
>
> ...and yes, your average access time will be in the 5.5ms - 6ms range
> when doing a physical seek.
> Even with RAID, you want to minimize seeks and maximize sequential IO
> when accessing them.
> Best to not go to HD at all ;-)

Well, certainly, if you can get more into RAM, you're always better
off. For writing, a battery-backed write cache, and for reading, lots
of system RAM.
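As a rough sketch of that smaller, cache-sized test: the file name and
block count below are only illustrative (131072 blocks of 8192 bytes is
1GiB, which should fit comfortably in RAM on a box this size), so adjust
the size to something well under your installed memory:

  # write a ~1GB test file (131072 * 8192 bytes = 1GiB)
  time dd if=/dev/zero of=testfile_1g bs=8192 count=131072
  # first read may still hit the disks
  time dd if=testfile_1g of=/dev/null bs=8192 count=131072
  # second read should come almost entirely from the page cache
  time dd if=testfile_1g of=/dev/null bs=8192 count=131072

If that second read isn't dramatically faster than the ~150MB/s you see
on the 8GB file, then the big-file numbers really are limited by the
array (or the bus), not inflated by caching.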
> Hope this helps,
> Ron Peacetree

John
=:->